00:00:00.000 Started by upstream project "autotest-per-patch" build number 132049
00:00:00.000 originally caused by:
00:00:00.000 Started by user sys_sgci
00:00:00.054 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.057 The recommended git tool is: git
00:00:00.057 using credential 00000000-0000-0000-0000-000000000002
00:00:00.058 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.079 Fetching changes from the remote Git repository
00:00:00.081 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.123 Using shallow fetch with depth 1
00:00:00.123 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.123 > git --version # timeout=10
00:00:00.189 > git --version # 'git version 2.39.2'
00:00:00.190 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.244 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.244 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.014 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.026 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.042 Checking out Revision 71582ff3be096f9d5ed302be37c05572278bd285 (FETCH_HEAD)
00:00:04.042 > git config core.sparsecheckout # timeout=10
00:00:04.053 > git read-tree -mu HEAD # timeout=10
00:00:04.070 > git checkout -f 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=5
00:00:04.091 Commit message: "jenkins/jjb-config: Add SPDK_TEST_NVME_INTERRUPT to nvme-phy job"
00:00:04.091 > git rev-list --no-walk 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=10
00:00:04.196 [Pipeline] Start of Pipeline
00:00:04.210 [Pipeline] library
00:00:04.212 Loading library shm_lib@master
00:00:04.212 Library shm_lib@master is cached. Copying from home.
00:00:04.229 [Pipeline] node
00:00:04.237 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.238 [Pipeline] {
00:00:04.249 [Pipeline] catchError
00:00:04.251 [Pipeline] {
00:00:04.266 [Pipeline] wrap
00:00:04.275 [Pipeline] {
00:00:04.284 [Pipeline] stage
00:00:04.286 [Pipeline] { (Prologue)
00:00:04.506 [Pipeline] sh
00:00:04.792 + logger -p user.info -t JENKINS-CI
00:00:04.805 [Pipeline] echo
00:00:04.806 Node: CYP9
00:00:04.813 [Pipeline] sh
00:00:05.114 [Pipeline] setCustomBuildProperty
00:00:05.126 [Pipeline] echo
00:00:05.128 Cleanup processes
00:00:05.133 [Pipeline] sh
00:00:05.420 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.420 2664167 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.432 [Pipeline] sh
00:00:05.721 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.721 ++ grep -v 'sudo pgrep'
00:00:05.721 ++ awk '{print $1}'
00:00:05.721 + sudo kill -9
00:00:05.721 + true
00:00:05.734 [Pipeline] cleanWs
00:00:05.744 [WS-CLEANUP] Deleting project workspace...
00:00:05.744 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.751 [WS-CLEANUP] done
00:00:05.755 [Pipeline] setCustomBuildProperty
00:00:05.769 [Pipeline] sh
00:00:06.053 + sudo git config --global --replace-all safe.directory '*'
00:00:06.170 [Pipeline] httpRequest
00:00:06.772 [Pipeline] echo
00:00:06.773 Sorcerer 10.211.164.101 is alive
00:00:06.782 [Pipeline] retry
00:00:06.784 [Pipeline] {
00:00:06.798 [Pipeline] httpRequest
00:00:06.802 HttpMethod: GET
00:00:06.802 URL: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:06.803 Sending request to url: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:06.817 Response Code: HTTP/1.1 200 OK
00:00:06.817 Success: Status code 200 is in the accepted range: 200,404
00:00:06.817 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:11.874 [Pipeline] }
00:00:11.892 [Pipeline] // retry
00:00:11.900 [Pipeline] sh
00:00:12.187 + tar --no-same-owner -xf jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz
00:00:12.204 [Pipeline] httpRequest
00:00:13.746 [Pipeline] echo
00:00:13.748 Sorcerer 10.211.164.101 is alive
00:00:13.757 [Pipeline] retry
00:00:13.759 [Pipeline] {
00:00:13.774 [Pipeline] httpRequest
00:00:13.779 HttpMethod: GET
00:00:13.779 URL: http://10.211.164.101/packages/spdk_d0fd7ad5907741a94c735f38298ee315e9d58ae5.tar.gz
00:00:13.780 Sending request to url: http://10.211.164.101/packages/spdk_d0fd7ad5907741a94c735f38298ee315e9d58ae5.tar.gz
00:00:13.786 Response Code: HTTP/1.1 200 OK
00:00:13.786 Success: Status code 200 is in the accepted range: 200,404
00:00:13.787 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_d0fd7ad5907741a94c735f38298ee315e9d58ae5.tar.gz
00:01:25.839 [Pipeline] }
00:01:25.856 [Pipeline] // retry
00:01:25.863 [Pipeline] sh
00:01:26.152 + tar --no-same-owner -xf spdk_d0fd7ad5907741a94c735f38298ee315e9d58ae5.tar.gz
00:01:28.712 [Pipeline] sh
00:01:29.000 + git -C spdk log --oneline -n5
00:01:29.000 d0fd7ad59 lib/reduce: Add a chunk data read/write cache
00:01:29.000 fa3ab7384 bdev/raid: Fix raid_bdev->sb null pointer
00:01:29.000 12fc2abf1 test: Remove autopackage.sh
00:01:29.000 83ba90867 fio/bdev: fix typo in README
00:01:29.000 45379ed84 module/compress: Cleanup vol data, when claim fails
00:01:29.012 [Pipeline] }
00:01:29.025 [Pipeline] // stage
00:01:29.034 [Pipeline] stage
00:01:29.037 [Pipeline] { (Prepare)
00:01:29.053 [Pipeline] writeFile
00:01:29.069 [Pipeline] sh
00:01:29.358 + logger -p user.info -t JENKINS-CI
00:01:29.372 [Pipeline] sh
00:01:29.660 + logger -p user.info -t JENKINS-CI
00:01:29.674 [Pipeline] sh
00:01:29.963 + cat autorun-spdk.conf
00:01:29.964 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:29.964 SPDK_TEST_NVMF=1
00:01:29.964 SPDK_TEST_NVME_CLI=1
00:01:29.964 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:29.964 SPDK_TEST_NVMF_NICS=e810
00:01:29.964 SPDK_TEST_VFIOUSER=1
00:01:29.964 SPDK_RUN_UBSAN=1
00:01:29.964 NET_TYPE=phy
00:01:29.972 RUN_NIGHTLY=0
00:01:29.977 [Pipeline] readFile
00:01:30.002 [Pipeline] withEnv
00:01:30.004 [Pipeline] {
00:01:30.017 [Pipeline] sh
00:01:30.306 + set -ex
00:01:30.306 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:30.306 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:30.306 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:30.306 ++ SPDK_TEST_NVMF=1
00:01:30.306 ++ SPDK_TEST_NVME_CLI=1
00:01:30.306 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:30.306 ++ SPDK_TEST_NVMF_NICS=e810
00:01:30.306 ++ SPDK_TEST_VFIOUSER=1
00:01:30.306 ++ SPDK_RUN_UBSAN=1
00:01:30.306 ++ NET_TYPE=phy
00:01:30.306 ++ RUN_NIGHTLY=0
00:01:30.306 + case $SPDK_TEST_NVMF_NICS in
00:01:30.306 + DRIVERS=ice
00:01:30.306 + [[ tcp == \r\d\m\a ]]
00:01:30.306 + [[ -n ice ]]
00:01:30.306 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:30.306 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:30.306 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:30.306 rmmod: ERROR: Module irdma is not currently loaded
00:01:30.306 rmmod: ERROR: Module i40iw is not currently loaded
00:01:30.306 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:30.306 + true
00:01:30.306 + for D in $DRIVERS
00:01:30.306 + sudo modprobe ice
00:01:30.306 + exit 0
00:01:30.317 [Pipeline] }
00:01:30.331 [Pipeline] // withEnv
00:01:30.337 [Pipeline] }
00:01:30.351 [Pipeline] // stage
00:01:30.361 [Pipeline] catchError
00:01:30.363 [Pipeline] {
00:01:30.377 [Pipeline] timeout
00:01:30.377 Timeout set to expire in 1 hr 0 min
00:01:30.379 [Pipeline] {
00:01:30.393 [Pipeline] stage
00:01:30.396 [Pipeline] { (Tests)
00:01:30.410 [Pipeline] sh
00:01:30.699 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:30.699 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:30.699 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:30.699 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:30.699 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:30.699 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:30.699 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:30.699 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:30.699 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:30.699 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:30.699 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:30.699 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:30.699 + source /etc/os-release
00:01:30.699 ++ NAME='Fedora Linux'
00:01:30.699 ++ VERSION='39 (Cloud Edition)'
00:01:30.699 ++ ID=fedora
00:01:30.699 ++ VERSION_ID=39
00:01:30.699 ++ VERSION_CODENAME=
00:01:30.699 ++ PLATFORM_ID=platform:f39
00:01:30.699 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:30.699 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:30.699 ++ LOGO=fedora-logo-icon
00:01:30.699 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:30.699 ++ HOME_URL=https://fedoraproject.org/
00:01:30.699 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:30.699 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:30.699 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:30.699 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:30.699 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:30.699 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:30.699 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:30.699 ++ SUPPORT_END=2024-11-12
00:01:30.699 ++ VARIANT='Cloud Edition'
00:01:30.699 ++ VARIANT_ID=cloud
00:01:30.699 + uname -a
00:01:30.699 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:30.699 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:34.001 Hugepages
00:01:34.001 node hugesize free / total
00:01:34.001 node0 1048576kB 0 / 0
00:01:34.001 node0 2048kB 0 / 0
00:01:34.001 node1 1048576kB 0 / 0
00:01:34.001 node1 2048kB 0 / 0
00:01:34.001
00:01:34.001 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:34.001 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:34.001 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:34.001 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:34.001 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:34.001 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:34.001 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:34.001 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:34.001 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:34.001 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:34.001 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:34.001 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:34.001 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:34.001 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:34.001 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:34.001 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:34.001 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:34.001 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:34.001 + rm -f /tmp/spdk-ld-path
00:01:34.001 + source autorun-spdk.conf
00:01:34.001 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:34.001 ++ SPDK_TEST_NVMF=1
00:01:34.001 ++ SPDK_TEST_NVME_CLI=1
00:01:34.001 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:34.001 ++ SPDK_TEST_NVMF_NICS=e810
00:01:34.001 ++ SPDK_TEST_VFIOUSER=1
00:01:34.001 ++ SPDK_RUN_UBSAN=1
00:01:34.001 ++ NET_TYPE=phy
00:01:34.001 ++ RUN_NIGHTLY=0
00:01:34.001 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:34.001 + [[ -n '' ]]
00:01:34.001 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:34.001 + for M in /var/spdk/build-*-manifest.txt
00:01:34.001 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:34.001 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:34.001 + for M in /var/spdk/build-*-manifest.txt
00:01:34.001 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:34.001 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:34.001 + for M in /var/spdk/build-*-manifest.txt
00:01:34.001 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:34.001 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:34.001 ++ uname
00:01:34.001 + [[ Linux == \L\i\n\u\x ]]
00:01:34.001 + sudo dmesg -T
00:01:34.001 + sudo dmesg --clear
00:01:34.001 + dmesg_pid=2665732
00:01:34.001 + [[ Fedora Linux == FreeBSD ]]
00:01:34.001 + sudo dmesg -Tw
00:01:34.001 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:34.001 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:34.001 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:34.001 + [[ -x /usr/src/fio-static/fio ]]
00:01:34.001 + export FIO_BIN=/usr/src/fio-static/fio
00:01:34.001 + FIO_BIN=/usr/src/fio-static/fio
00:01:34.001 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:34.001 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:34.001 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:34.001 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:34.001 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:34.001 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:34.001 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:34.001 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:34.001 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
04:12:47 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
04:12:47 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
04:12:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
04:12:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
04:12:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
04:12:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
04:12:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
04:12:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
04:12:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
04:12:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
04:12:47 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
04:12:47 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
04:12:47 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
04:12:47 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
04:12:47 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
04:12:47 -- scripts/common.sh@15 -- $ shopt -s extglob
04:12:47 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
04:12:47 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
04:12:47 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
04:12:47 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:12:47 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:12:47 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:12:47 -- paths/export.sh@5 -- $ export PATH
04:12:47 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
04:12:47 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
04:12:47 -- common/autobuild_common.sh@486 -- $ date +%s
04:12:47 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730776367.XXXXXX
04:12:47 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730776367.d2teZr
04:12:47 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
04:12:47 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
04:12:47 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
04:12:47 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
04:12:47 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
04:12:47 -- common/autobuild_common.sh@502 -- $ get_config_params
04:12:47 -- common/autotest_common.sh@407 -- $ xtrace_disable
04:12:47 -- common/autotest_common.sh@10 -- $ set +x
04:12:47 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
04:12:47 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
04:12:47 -- pm/common@17 -- $ local monitor
04:12:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
04:12:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
04:12:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
04:12:47 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
04:12:47 -- pm/common@21 -- $ date +%s
04:12:47 -- pm/common@25 -- $ sleep 1
04:12:47 -- pm/common@21 -- $ date +%s
04:12:47 -- pm/common@21 -- $ date +%s
04:12:47 -- pm/common@21 -- $ date +%s
04:12:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730776367
04:12:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730776367
04:12:47 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730776367
04:12:47 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1730776367
00:01:34.002 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730776367_collect-vmstat.pm.log
00:01:34.002 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730776367_collect-cpu-load.pm.log
00:01:34.002 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730776367_collect-cpu-temp.pm.log
00:01:34.002 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1730776367_collect-bmc-pm.bmc.pm.log
00:01:34.943 04:12:48 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:34.943 04:12:48 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:34.943 04:12:48 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:34.943 04:12:48 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:34.943 04:12:48 -- spdk/autobuild.sh@16 -- $ date -u
00:01:34.943 Tue Nov 5 03:12:48 AM UTC 2024
00:01:34.943 04:12:48 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:34.943 v25.01-pre-125-gd0fd7ad59
00:01:34.943 04:12:48 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:34.943 04:12:48 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:34.943 04:12:48 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
04:12:48 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
04:12:48 -- common/autotest_common.sh@1109 -- $ xtrace_disable
04:12:48 -- common/autotest_common.sh@10 -- $ set +x
00:01:34.943 ************************************
00:01:34.943 START TEST ubsan
00:01:34.943 ************************************
00:01:34.943 04:12:48 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:01:34.943 using ubsan
00:01:34.943
00:01:34.943 real 0m0.000s
00:01:34.943 user 0m0.000s
00:01:34.943 sys 0m0.000s
00:01:34.943 04:12:48 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:34.943 04:12:48 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:34.943 ************************************
00:01:34.943 END TEST ubsan
00:01:34.943 ************************************
00:01:35.204 04:12:48 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:35.204 04:12:48 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:35.204 04:12:48 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:35.204 04:12:48 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:35.204 04:12:48 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:35.204 04:12:48 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:35.204 04:12:48 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:35.204 04:12:48 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
04:12:48 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:35.204 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:35.204 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:35.775 Using 'verbs' RDMA provider
00:01:51.259 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:03.579 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:03.579 Creating mk/config.mk...done.
00:02:03.579 Creating mk/cc.flags.mk...done.
00:02:03.579 Type 'make' to build.
00:02:03.579 04:13:16 -- spdk/autobuild.sh@70 -- $ run_test make make -j144
04:13:16 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
04:13:16 -- common/autotest_common.sh@1109 -- $ xtrace_disable
04:13:16 -- common/autotest_common.sh@10 -- $ set +x
00:02:03.579 ************************************
00:02:03.579 START TEST make
00:02:03.579 ************************************
00:02:03.579 04:13:17 make -- common/autotest_common.sh@1127 -- $ make -j144
00:02:03.841 make[1]: Nothing to be done for 'all'.
00:02:05.232 The Meson build system
00:02:05.232 Version: 1.5.0
00:02:05.232 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:02:05.232 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:05.232 Build type: native build
00:02:05.232 Project name: libvfio-user
00:02:05.232 Project version: 0.0.1
00:02:05.232 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:05.232 C linker for the host machine: cc ld.bfd 2.40-14
00:02:05.232 Host machine cpu family: x86_64
00:02:05.232 Host machine cpu: x86_64
00:02:05.232 Run-time dependency threads found: YES
00:02:05.232 Library dl found: YES
00:02:05.232 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:05.232 Run-time dependency json-c found: YES 0.17
00:02:05.232 Run-time dependency cmocka found: YES 1.1.7
00:02:05.232 Program pytest-3 found: NO
00:02:05.232 Program flake8 found: NO
00:02:05.232 Program misspell-fixer found: NO
00:02:05.232 Program restructuredtext-lint found: NO
00:02:05.232 Program valgrind found: YES (/usr/bin/valgrind)
00:02:05.232 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:05.232 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:05.232 Compiler for C supports arguments -Wwrite-strings: YES
00:02:05.232 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:05.232 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:02:05.232 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:02:05.232 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:05.232 Build targets in project: 8
00:02:05.232 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:02:05.232 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:02:05.232
00:02:05.232 libvfio-user 0.0.1
00:02:05.232
00:02:05.232 User defined options
00:02:05.232 buildtype : debug
00:02:05.232 default_library: shared
00:02:05.232 libdir : /usr/local/lib
00:02:05.232
00:02:05.232 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:05.490 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:05.490 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:02:05.490 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:02:05.490 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:02:05.490 [4/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:02:05.490 [5/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:02:05.490 [6/37] Compiling C object samples/null.p/null.c.o
00:02:05.490 [7/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:02:05.490 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:02:05.490 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:02:05.490 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:02:05.490 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:02:05.490 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:02:05.490 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:02:05.490 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:02:05.751 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:02:05.751 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:02:05.751 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:02:05.751 [18/37] Compiling C object samples/server.p/server.c.o
00:02:05.751 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:02:05.751 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:02:05.751 [21/37] Compiling C object test/unit_tests.p/mocks.c.o
00:02:05.751 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:02:05.751 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:02:05.751 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:02:05.751 [25/37] Compiling C object samples/client.p/client.c.o
00:02:05.751 [26/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:02:05.751 [27/37] Linking target samples/client
00:02:05.751 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:02:05.751 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:02:05.751 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:02:05.751 [31/37] Linking target test/unit_tests
00:02:06.011 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:02:06.011 [33/37] Linking target samples/server
00:02:06.011 [34/37] Linking target samples/null
00:02:06.011 [35/37] Linking target samples/lspci
00:02:06.011 [36/37] Linking target samples/gpio-pci-idio-16
00:02:06.011 [37/37] Linking target samples/shadow_ioeventfd_server
00:02:06.011 INFO: autodetecting backend as ninja
00:02:06.011 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:06.011 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:02:06.271 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:02:06.271 ninja: no work to do.
00:02:12.859 The Meson build system
00:02:12.859 Version: 1.5.0
00:02:12.859 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:02:12.859 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:02:12.859 Build type: native build
00:02:12.859 Program cat found: YES (/usr/bin/cat)
00:02:12.859 Project name: DPDK
00:02:12.859 Project version: 24.03.0
00:02:12.859 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:12.859 C linker for the host machine: cc ld.bfd 2.40-14
00:02:12.860 Host machine cpu family: x86_64
00:02:12.860 Host machine cpu: x86_64
00:02:12.860 Message: ## Building in Developer Mode ##
00:02:12.860 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:12.860 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:12.860 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:12.860 Program python3 found: YES (/usr/bin/python3)
00:02:12.860 Program cat found: YES (/usr/bin/cat)
00:02:12.860 Compiler for C supports arguments -march=native: YES
00:02:12.860 Checking for size of "void *" : 8
00:02:12.860 Checking for size of "void *" : 8 (cached)
00:02:12.860 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:12.860 Library m found: YES
00:02:12.860 Library numa found: YES
00:02:12.860 Has header "numaif.h" : YES
00:02:12.860 Library fdt found: NO
00:02:12.860 Library execinfo found: NO
00:02:12.860 Has header "execinfo.h" : YES
00:02:12.860 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:12.860 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:12.860 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:12.860 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:12.860 Run-time dependency openssl found: YES 3.1.1
00:02:12.860 Run-time dependency libpcap found: YES 1.10.4
00:02:12.860 Has header "pcap.h" with dependency libpcap: YES
00:02:12.860 Compiler for C supports arguments -Wcast-qual: YES
00:02:12.860 Compiler for C supports arguments -Wdeprecated: YES
00:02:12.860 Compiler for C supports arguments -Wformat: YES
00:02:12.860 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:12.860 Compiler for C supports arguments -Wformat-security: NO
00:02:12.860 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:12.860 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:12.860 Compiler for C supports arguments -Wnested-externs: YES
00:02:12.860 Compiler for C supports arguments -Wold-style-definition: YES
00:02:12.860 Compiler for C supports arguments -Wpointer-arith: YES
00:02:12.860 Compiler for C supports arguments -Wsign-compare: YES
00:02:12.860 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:12.860 Compiler for C supports arguments -Wundef: YES
00:02:12.860 Compiler for C supports arguments -Wwrite-strings: YES
00:02:12.860 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:12.860 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:12.860 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:12.860 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:12.860 Program objdump found: YES (/usr/bin/objdump)
00:02:12.860 Compiler for C supports arguments -mavx512f: YES
00:02:12.860 Checking if "AVX512 checking" compiles: YES
00:02:12.860 Fetching value of define "__SSE4_2__" : 1
00:02:12.860 Fetching value of define "__AES__" : 1
00:02:12.860 Fetching value of define "__AVX__" : 1
00:02:12.860 Fetching value of define "__AVX2__" : 1
00:02:12.860 Fetching value of define "__AVX512BW__" : 1
00:02:12.860 Fetching value of define "__AVX512CD__" : 1
00:02:12.860 Fetching value of define "__AVX512DQ__" : 1
00:02:12.860 Fetching value of define "__AVX512F__" : 1
00:02:12.860 Fetching value of define "__AVX512VL__" : 1
00:02:12.860 Fetching value of define "__PCLMUL__" : 1
00:02:12.860 Fetching value of define "__RDRND__" : 1
00:02:12.860 Fetching value of define "__RDSEED__" : 1
00:02:12.860 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:12.860 Fetching value of define "__znver1__" : (undefined)
00:02:12.860 Fetching value of define "__znver2__" : (undefined)
00:02:12.860 Fetching value of define "__znver3__" : (undefined)
00:02:12.860 Fetching value of define "__znver4__" : (undefined)
00:02:12.860 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:12.860 Message: lib/log: Defining dependency "log"
00:02:12.860 Message: lib/kvargs: Defining dependency "kvargs"
00:02:12.860 Message: lib/telemetry: Defining dependency "telemetry"
00:02:12.860 Checking for function "getentropy" : NO
00:02:12.860 Message: lib/eal: Defining dependency "eal"
00:02:12.860 Message: lib/ring: Defining dependency "ring"
00:02:12.860 Message: lib/rcu: Defining dependency "rcu"
00:02:12.860 Message: lib/mempool: Defining dependency "mempool"
00:02:12.860 Message: lib/mbuf: Defining dependency "mbuf"
00:02:12.860 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:12.860 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:12.860 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:12.860 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:12.860 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:12.860 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:12.860 Compiler for C supports arguments -mpclmul: YES
00:02:12.860 Compiler for C supports arguments -maes: YES
00:02:12.860 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:12.860 Compiler for C supports arguments -mavx512bw: YES
00:02:12.860 Compiler for C supports arguments -mavx512dq: YES
00:02:12.860 Compiler for C supports arguments -mavx512vl: YES
00:02:12.860 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:12.860 Compiler for C supports arguments -mavx2: YES
00:02:12.860 Compiler for C supports arguments -mavx: YES
00:02:12.860 Message: lib/net: Defining dependency "net"
00:02:12.860 Message: lib/meter: Defining dependency "meter"
00:02:12.860 Message: lib/ethdev: Defining dependency "ethdev"
00:02:12.860 Message: lib/pci: Defining dependency "pci"
00:02:12.860 Message: lib/cmdline: Defining dependency "cmdline"
00:02:12.860 Message: lib/hash: Defining dependency "hash"
00:02:12.860 Message: lib/timer: Defining dependency "timer"
00:02:12.860 Message: lib/compressdev: Defining dependency "compressdev"
00:02:12.860 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:12.860 Message: lib/dmadev: Defining dependency "dmadev"
00:02:12.860 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:12.860 Message: lib/power: Defining dependency "power"
00:02:12.860 Message: lib/reorder: Defining dependency "reorder"
00:02:12.860 Message: lib/security: Defining dependency "security"
00:02:12.860 Has header "linux/userfaultfd.h" : YES
00:02:12.860 Has header "linux/vduse.h" : YES
00:02:12.860 Message: lib/vhost: Defining dependency "vhost"
00:02:12.860 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:12.860 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:12.860 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:12.860 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:12.860 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:12.860 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:12.860 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:12.860 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:12.860 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:12.860 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:12.860 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:12.860 Configuring doxy-api-html.conf using configuration
00:02:12.860 Configuring doxy-api-man.conf using configuration
00:02:12.860 Program mandb found: YES (/usr/bin/mandb)
00:02:12.860 Program sphinx-build found: NO
00:02:12.860 Configuring rte_build_config.h using configuration
00:02:12.860 Message:
00:02:12.860 =================
00:02:12.860 Applications Enabled
00:02:12.860 =================
00:02:12.860
00:02:12.860 apps:
00:02:12.860
00:02:12.860
00:02:12.860 Message:
00:02:12.860 =================
00:02:12.860 Libraries Enabled
00:02:12.860 =================
00:02:12.860
00:02:12.860 libs:
00:02:12.860 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:12.860 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:12.860 cryptodev, dmadev, power, reorder, security, vhost,
00:02:12.860
00:02:12.860 Message:
00:02:12.860 ===============
00:02:12.860 Drivers Enabled
00:02:12.860 ===============
00:02:12.860
00:02:12.860 common:
00:02:12.860
00:02:12.860 bus:
00:02:12.860 pci, vdev,
00:02:12.860 mempool:
00:02:12.860 ring,
00:02:12.860 dma:
00:02:12.860
00:02:12.860 net:
00:02:12.860
00:02:12.860 crypto:
00:02:12.860
00:02:12.860 compress:
00:02:12.860
00:02:12.860 vdpa:
00:02:12.860
00:02:12.860
00:02:12.860 Message:
00:02:12.860 =================
00:02:12.860 Content Skipped
00:02:12.860 =================
00:02:12.860
00:02:12.860 apps:
00:02:12.860 dumpcap: explicitly disabled via build config
00:02:12.860 graph: explicitly disabled via build config
00:02:12.860 pdump: explicitly disabled via build config
00:02:12.860 proc-info: explicitly disabled via build config
00:02:12.860 test-acl: explicitly disabled via build config
00:02:12.860 test-bbdev: explicitly disabled via build config
00:02:12.860 test-cmdline: explicitly disabled via build config
00:02:12.860 test-compress-perf: explicitly disabled via build config
00:02:12.860 test-crypto-perf: explicitly disabled via build config
00:02:12.860 test-dma-perf: explicitly disabled via build config
00:02:12.860 test-eventdev: explicitly disabled via build config
00:02:12.860 test-fib: explicitly disabled via build config
00:02:12.860 test-flow-perf: explicitly disabled via build config
via build config 00:02:12.860 test-mldev: explicitly disabled via build config 00:02:12.860 test-pipeline: explicitly disabled via build config 00:02:12.860 test-pmd: explicitly disabled via build config 00:02:12.860 test-regex: explicitly disabled via build config 00:02:12.860 test-sad: explicitly disabled via build config 00:02:12.860 test-security-perf: explicitly disabled via build config 00:02:12.860 00:02:12.860 libs: 00:02:12.860 argparse: explicitly disabled via build config 00:02:12.860 metrics: explicitly disabled via build config 00:02:12.860 acl: explicitly disabled via build config 00:02:12.860 bbdev: explicitly disabled via build config 00:02:12.860 bitratestats: explicitly disabled via build config 00:02:12.860 bpf: explicitly disabled via build config 00:02:12.860 cfgfile: explicitly disabled via build config 00:02:12.861 distributor: explicitly disabled via build config 00:02:12.861 efd: explicitly disabled via build config 00:02:12.861 eventdev: explicitly disabled via build config 00:02:12.861 dispatcher: explicitly disabled via build config 00:02:12.861 gpudev: explicitly disabled via build config 00:02:12.861 gro: explicitly disabled via build config 00:02:12.861 gso: explicitly disabled via build config 00:02:12.861 ip_frag: explicitly disabled via build config 00:02:12.861 jobstats: explicitly disabled via build config 00:02:12.861 latencystats: explicitly disabled via build config 00:02:12.861 lpm: explicitly disabled via build config 00:02:12.861 member: explicitly disabled via build config 00:02:12.861 pcapng: explicitly disabled via build config 00:02:12.861 rawdev: explicitly disabled via build config 00:02:12.861 regexdev: explicitly disabled via build config 00:02:12.861 mldev: explicitly disabled via build config 00:02:12.861 rib: explicitly disabled via build config 00:02:12.861 sched: explicitly disabled via build config 00:02:12.861 stack: explicitly disabled via build config 00:02:12.861 ipsec: explicitly disabled via build config 00:02:12.861 pdcp: explicitly disabled via build config 00:02:12.861 fib: explicitly disabled via build config 00:02:12.861 port: explicitly disabled via build config 00:02:12.861 pdump: explicitly disabled via build config 00:02:12.861 table: explicitly disabled via build config 00:02:12.861 pipeline: explicitly disabled via build config 00:02:12.861 graph: explicitly disabled via build config 00:02:12.861 node: explicitly disabled via build config 00:02:12.861 00:02:12.861 drivers: 00:02:12.861 common/cpt: not in enabled drivers build config 00:02:12.861 common/dpaax: not in enabled drivers build config 00:02:12.861 common/iavf: not in enabled drivers build config 00:02:12.861 common/idpf: not in enabled drivers build config 00:02:12.861 common/ionic: not in enabled drivers build config 00:02:12.861 common/mvep: not in enabled drivers build config 00:02:12.861 common/octeontx: not in enabled drivers build config 00:02:12.861 bus/auxiliary: not in enabled drivers build config 00:02:12.861 bus/cdx: not in enabled drivers build config 00:02:12.861 bus/dpaa: not in enabled drivers build config 00:02:12.861 bus/fslmc: not in enabled drivers build config 00:02:12.861 bus/ifpga: not in enabled drivers build config 00:02:12.861 bus/platform: not in enabled drivers build config 00:02:12.861 bus/uacce: not in enabled drivers build config 00:02:12.861 bus/vmbus: not in enabled drivers build config 00:02:12.861 common/cnxk: not in enabled drivers build config 00:02:12.861 common/mlx5: not in enabled drivers build config 00:02:12.861 
00:02:12.861 common/nfp: not in enabled drivers build config
00:02:12.861 common/nitrox: not in enabled drivers build config
00:02:12.861 common/qat: not in enabled drivers build config
00:02:12.861 common/sfc_efx: not in enabled drivers build config
00:02:12.861 mempool/bucket: not in enabled drivers build config
00:02:12.861 mempool/cnxk: not in enabled drivers build config
00:02:12.861 mempool/dpaa: not in enabled drivers build config
00:02:12.861 mempool/dpaa2: not in enabled drivers build config
00:02:12.861 mempool/octeontx: not in enabled drivers build config
00:02:12.861 mempool/stack: not in enabled drivers build config
00:02:12.861 dma/cnxk: not in enabled drivers build config
00:02:12.861 dma/dpaa: not in enabled drivers build config
00:02:12.861 dma/dpaa2: not in enabled drivers build config
00:02:12.861 dma/hisilicon: not in enabled drivers build config
00:02:12.861 dma/idxd: not in enabled drivers build config
00:02:12.861 dma/ioat: not in enabled drivers build config
00:02:12.861 dma/skeleton: not in enabled drivers build config
00:02:12.861 net/af_packet: not in enabled drivers build config
00:02:12.861 net/af_xdp: not in enabled drivers build config
00:02:12.861 net/ark: not in enabled drivers build config
00:02:12.861 net/atlantic: not in enabled drivers build config
00:02:12.861 net/avp: not in enabled drivers build config
00:02:12.861 net/axgbe: not in enabled drivers build config
00:02:12.861 net/bnx2x: not in enabled drivers build config
00:02:12.861 net/bnxt: not in enabled drivers build config
00:02:12.861 net/bonding: not in enabled drivers build config
00:02:12.861 net/cnxk: not in enabled drivers build config
00:02:12.861 net/cpfl: not in enabled drivers build config
00:02:12.861 net/cxgbe: not in enabled drivers build config
00:02:12.861 net/dpaa: not in enabled drivers build config
00:02:12.861 net/dpaa2: not in enabled drivers build config
00:02:12.861 net/e1000: not in enabled drivers build config
00:02:12.861 net/ena: not in enabled drivers build config
00:02:12.861 net/enetc: not in enabled drivers build config
00:02:12.861 net/enetfec: not in enabled drivers build config
00:02:12.861 net/enic: not in enabled drivers build config
00:02:12.861 net/failsafe: not in enabled drivers build config
00:02:12.861 net/fm10k: not in enabled drivers build config
00:02:12.861 net/gve: not in enabled drivers build config
00:02:12.861 net/hinic: not in enabled drivers build config
00:02:12.861 net/hns3: not in enabled drivers build config
00:02:12.861 net/i40e: not in enabled drivers build config
00:02:12.861 net/iavf: not in enabled drivers build config
00:02:12.861 net/ice: not in enabled drivers build config
00:02:12.861 net/idpf: not in enabled drivers build config
00:02:12.861 net/igc: not in enabled drivers build config
00:02:12.861 net/ionic: not in enabled drivers build config
00:02:12.861 net/ipn3ke: not in enabled drivers build config
00:02:12.861 net/ixgbe: not in enabled drivers build config
00:02:12.861 net/mana: not in enabled drivers build config
00:02:12.861 net/memif: not in enabled drivers build config
00:02:12.861 net/mlx4: not in enabled drivers build config
00:02:12.861 net/mlx5: not in enabled drivers build config
00:02:12.861 net/mvneta: not in enabled drivers build config
00:02:12.861 net/mvpp2: not in enabled drivers build config
00:02:12.861 net/netvsc: not in enabled drivers build config
00:02:12.861 net/nfb: not in enabled drivers build config
00:02:12.861 net/nfp: not in enabled drivers build config
00:02:12.861 net/ngbe: not in enabled drivers build config
00:02:12.861 net/null: not in enabled drivers build config
00:02:12.861 net/octeontx: not in enabled drivers build config
00:02:12.861 net/octeon_ep: not in enabled drivers build config
00:02:12.861 net/pcap: not in enabled drivers build config
00:02:12.861 net/pfe: not in enabled drivers build config
00:02:12.861 net/qede: not in enabled drivers build config
00:02:12.861 net/ring: not in enabled drivers build config
00:02:12.861 net/sfc: not in enabled drivers build config
00:02:12.861 net/softnic: not in enabled drivers build config
00:02:12.861 net/tap: not in enabled drivers build config
00:02:12.861 net/thunderx: not in enabled drivers build config
00:02:12.861 net/txgbe: not in enabled drivers build config
00:02:12.861 net/vdev_netvsc: not in enabled drivers build config
00:02:12.861 net/vhost: not in enabled drivers build config
00:02:12.861 net/virtio: not in enabled drivers build config
00:02:12.861 net/vmxnet3: not in enabled drivers build config
00:02:12.861 raw/*: missing internal dependency, "rawdev"
00:02:12.861 crypto/armv8: not in enabled drivers build config
00:02:12.861 crypto/bcmfs: not in enabled drivers build config
00:02:12.861 crypto/caam_jr: not in enabled drivers build config
00:02:12.861 crypto/ccp: not in enabled drivers build config
00:02:12.861 crypto/cnxk: not in enabled drivers build config
00:02:12.861 crypto/dpaa_sec: not in enabled drivers build config
00:02:12.861 crypto/dpaa2_sec: not in enabled drivers build config
00:02:12.861 crypto/ipsec_mb: not in enabled drivers build config
00:02:12.861 crypto/mlx5: not in enabled drivers build config
00:02:12.861 crypto/mvsam: not in enabled drivers build config
00:02:12.861 crypto/nitrox: not in enabled drivers build config
00:02:12.861 crypto/null: not in enabled drivers build config
00:02:12.861 crypto/octeontx: not in enabled drivers build config
00:02:12.861 crypto/openssl: not in enabled drivers build config
00:02:12.861 crypto/scheduler: not in enabled drivers build config
00:02:12.861 crypto/uadk: not in enabled drivers build config
00:02:12.861 crypto/virtio: not in enabled drivers build config
00:02:12.861 compress/isal: not in enabled drivers build config
00:02:12.861 compress/mlx5: not in enabled drivers build config
00:02:12.861 compress/nitrox: not in enabled drivers build config
00:02:12.861 compress/octeontx: not in enabled drivers build config
00:02:12.861 compress/zlib: not in enabled drivers build config
00:02:12.861 regex/*: missing internal dependency, "regexdev"
00:02:12.861 ml/*: missing internal dependency, "mldev"
00:02:12.861 vdpa/ifc: not in enabled drivers build config
00:02:12.861 vdpa/mlx5: not in enabled drivers build config
00:02:12.861 vdpa/nfp: not in enabled drivers build config
00:02:12.861 vdpa/sfc: not in enabled drivers build config
00:02:12.861 event/*: missing internal dependency, "eventdev"
00:02:12.861 baseband/*: missing internal dependency, "bbdev"
00:02:12.861 gpu/*: missing internal dependency, "gpudev"
00:02:12.861
00:02:12.861
00:02:12.861 Build targets in project: 84
00:02:12.861
00:02:12.861 DPDK 24.03.0
00:02:12.861
00:02:12.861 User defined options
00:02:12.861 buildtype : debug
00:02:12.861 default_library : shared
00:02:12.861 libdir : lib
00:02:12.861 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:02:12.861 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:12.861 c_link_args :
00:02:12.861 cpu_instruction_set: native
00:02:12.861 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:02:12.861 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:02:12.861 enable_docs : false
00:02:12.861 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:12.861 enable_kmods : false
00:02:12.861 max_lcores : 128
00:02:12.861 tests : false
00:02:12.861
00:02:12.861 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:13.132 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:02:13.132 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:13.132 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:13.132 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:13.132 [4/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:13.132 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:13.132 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:13.132 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:13.132 [8/267] Linking static target lib/librte_kvargs.a
00:02:13.132 [9/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:13.132 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:13.132 [11/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:13.132 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:13.132 [13/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:13.132 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:13.132 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:13.132 [16/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:13.132 [17/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:13.392 [18/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:13.392 [19/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:13.392 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:13.392 [21/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:13.392 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:13.392 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:13.392 [24/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:13.392 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:13.392 [26/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:13.392 [27/267] Linking static target lib/librte_log.a
00:02:13.392 [28/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:13.392 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:13.392 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:13.392 [31/267] Linking static target lib/librte_pci.a
00:02:13.392 [32/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:13.392 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:13.392 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:13.392 [35/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:13.392 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:02:13.392 [37/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:02:13.392 [38/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:02:13.652 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:13.652 [40/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.652 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:13.652 [42/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:13.652 [43/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:13.652 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:13.652 [45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:13.652 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:13.652 [47/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:13.652 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:13.652 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:13.652 [50/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:13.652 [51/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:02:13.652 [52/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:13.652 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:13.652 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:13.652 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:13.652 [56/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:13.652 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:13.652 [58/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:13.652 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:13.652 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:13.652 [61/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:13.652 [62/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:13.652 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:13.652 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:13.652 [65/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:13.652 [66/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:13.652 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:13.652 [68/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:13.652 [69/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:13.652 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:13.652 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:13.652 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:13.652 [74/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.652 [75/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:13.652 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:13.652 [77/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:13.652 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:13.652 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:13.652 [80/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.652 [81/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:13.652 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:13.652 [83/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:13.652 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:13.652 [85/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:13.652 [86/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:13.652 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:13.652 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:13.652 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:13.652 [90/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:13.652 [91/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:13.652 [92/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:13.652 [93/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:13.652 [94/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:13.652 [95/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:13.652 [96/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:13.652 [97/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:13.652 [98/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:13.652 [99/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:13.652 [100/267] Linking static target lib/librte_cmdline.a 00:02:13.652 [101/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:13.652 [102/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:13.652 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:13.652 [104/267] Linking static target lib/librte_ring.a 00:02:13.652 [105/267] Linking static target lib/librte_meter.a 00:02:13.652 [106/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:13.652 [107/267] Linking static target lib/librte_net.a 00:02:13.652 [108/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:13.914 [109/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:13.914 [110/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:13.914 [111/267] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:13.914 [112/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:13.914 [113/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:13.914 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:13.914 [115/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:13.914 [116/267] Linking static target lib/librte_telemetry.a 00:02:13.914 [117/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:13.914 [118/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:13.914 [119/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:13.914 [120/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:13.914 [121/267] Linking static target lib/librte_timer.a 00:02:13.914 [122/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:13.914 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:13.914 [124/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:13.914 [125/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:13.914 [126/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:13.914 [127/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:13.914 [128/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:13.914 [129/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:13.914 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:13.914 [131/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:13.914 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:13.914 [133/267] Linking static target lib/librte_mempool.a 00:02:13.914 [134/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:13.914 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:13.914 [136/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:13.914 [137/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:13.914 [138/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:13.914 [139/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.914 [140/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:13.914 [141/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:13.914 [142/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:13.914 [143/267] Linking static target lib/librte_dmadev.a 00:02:13.914 [144/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:13.914 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:13.914 [146/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:13.914 [147/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:13.914 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:13.914 [149/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:13.914 [150/267] Linking target lib/librte_log.so.24.1 00:02:13.914 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:13.914 
[152/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:13.914 [153/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:13.914 [154/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:13.914 [155/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:13.914 [156/267] Linking static target lib/librte_power.a 00:02:13.914 [157/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:13.914 [158/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:13.914 [159/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:13.914 [160/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:13.914 [161/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:13.915 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:13.915 [163/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:13.915 [164/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:13.915 [165/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:13.915 [166/267] Linking static target lib/librte_rcu.a 00:02:13.915 [167/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:13.915 [168/267] Linking static target lib/librte_compressdev.a 00:02:13.915 [169/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:13.915 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:13.915 [171/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:13.915 [172/267] Linking static target lib/librte_eal.a 00:02:13.915 [173/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:13.915 [174/267] Linking static target lib/librte_mbuf.a 00:02:13.915 [175/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:13.915 [176/267] Linking static target lib/librte_reorder.a 00:02:13.915 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:13.915 [178/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:13.915 [179/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:13.915 [180/267] Linking static target lib/librte_security.a 00:02:13.915 [181/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.915 [182/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:13.915 [183/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:13.915 [184/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:13.915 [185/267] Linking static target drivers/librte_bus_vdev.a 00:02:13.915 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:14.176 [187/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:14.176 [188/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.176 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:14.176 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:14.176 [191/267] Linking target lib/librte_kvargs.so.24.1 00:02:14.176 [192/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:14.176 [193/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:14.176 [194/267] Generating 
lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.176 [195/267] Linking static target lib/librte_hash.a 00:02:14.177 [196/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.177 [197/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:14.177 [198/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:14.177 [199/267] Linking static target drivers/librte_bus_pci.a 00:02:14.177 [200/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:14.177 [201/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:14.177 [202/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:14.177 [203/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:14.177 [204/267] Linking static target drivers/librte_mempool_ring.a 00:02:14.177 [205/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.438 [206/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.438 [207/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:14.438 [208/267] Linking static target lib/librte_cryptodev.a 00:02:14.438 [209/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:14.438 [210/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.438 [211/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.438 [212/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.438 [213/267] Linking target lib/librte_telemetry.so.24.1 00:02:14.699 [214/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.699 [215/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:14.699 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.699 [217/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:14.699 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:14.699 [219/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.699 [220/267] Linking static target lib/librte_ethdev.a 00:02:14.960 [221/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.960 [222/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.960 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.960 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.220 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.220 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.791 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:15.791 [228/267] Linking static target lib/librte_vhost.a 00:02:16.737 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:18.123 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.709 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.650 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.650 [233/267] Linking target lib/librte_eal.so.24.1 00:02:25.650 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:25.650 [235/267] Linking target lib/librte_meter.so.24.1 00:02:25.650 [236/267] Linking target lib/librte_timer.so.24.1 00:02:25.650 [237/267] Linking target lib/librte_ring.so.24.1 00:02:25.650 [238/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:25.650 [239/267] Linking target lib/librte_pci.so.24.1 00:02:25.650 [240/267] Linking target lib/librte_dmadev.so.24.1 00:02:25.911 [241/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:25.911 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:25.911 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:25.911 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:25.911 [245/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:25.911 [246/267] Linking target lib/librte_mempool.so.24.1 00:02:25.911 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:25.911 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:25.911 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:25.911 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:26.172 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:26.172 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:26.172 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:26.172 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:26.172 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:26.172 [256/267] Linking target lib/librte_net.so.24.1 00:02:26.172 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:26.432 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:26.432 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:26.432 [260/267] Linking target lib/librte_hash.so.24.1 00:02:26.432 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:26.432 [262/267] Linking target lib/librte_ethdev.so.24.1 00:02:26.432 [263/267] Linking target lib/librte_security.so.24.1 00:02:26.432 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:26.432 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:26.693 [266/267] Linking target lib/librte_power.so.24.1 00:02:26.693 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:26.693 INFO: autodetecting backend as ninja 00:02:26.693 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:29.991 CC lib/ut/ut.o 00:02:29.991 CC lib/log/log.o 00:02:29.991 CC lib/log/log_flags.o 00:02:29.991 CC lib/log/log_deprecated.o 00:02:29.991 CC lib/ut_mock/mock.o 00:02:30.252 LIB libspdk_log.a 00:02:30.252 LIB libspdk_ut.a 00:02:30.252 LIB libspdk_ut_mock.a 
00:02:30.252 SO libspdk_ut.so.2.0 00:02:30.252 SO libspdk_log.so.7.1 00:02:30.252 SO libspdk_ut_mock.so.6.0 00:02:30.252 SYMLINK libspdk_ut.so 00:02:30.252 SYMLINK libspdk_log.so 00:02:30.514 SYMLINK libspdk_ut_mock.so 00:02:30.774 CXX lib/trace_parser/trace.o 00:02:30.774 CC lib/dma/dma.o 00:02:30.774 CC lib/ioat/ioat.o 00:02:30.774 CC lib/util/cpuset.o 00:02:30.774 CC lib/util/base64.o 00:02:30.774 CC lib/util/bit_array.o 00:02:30.774 CC lib/util/crc16.o 00:02:30.774 CC lib/util/crc32.o 00:02:30.774 CC lib/util/crc32c.o 00:02:30.774 CC lib/util/crc32_ieee.o 00:02:30.774 CC lib/util/crc64.o 00:02:30.774 CC lib/util/dif.o 00:02:30.774 CC lib/util/fd.o 00:02:30.774 CC lib/util/fd_group.o 00:02:30.774 CC lib/util/file.o 00:02:30.774 CC lib/util/hexlify.o 00:02:30.774 CC lib/util/iov.o 00:02:30.774 CC lib/util/math.o 00:02:30.774 CC lib/util/net.o 00:02:30.774 CC lib/util/pipe.o 00:02:30.774 CC lib/util/strerror_tls.o 00:02:30.774 CC lib/util/string.o 00:02:30.774 CC lib/util/uuid.o 00:02:30.774 CC lib/util/xor.o 00:02:30.774 CC lib/util/zipf.o 00:02:30.774 CC lib/util/md5.o 00:02:31.034 CC lib/vfio_user/host/vfio_user_pci.o 00:02:31.034 CC lib/vfio_user/host/vfio_user.o 00:02:31.034 LIB libspdk_dma.a 00:02:31.034 SO libspdk_dma.so.5.0 00:02:31.034 LIB libspdk_ioat.a 00:02:31.034 SO libspdk_ioat.so.7.0 00:02:31.034 SYMLINK libspdk_dma.so 00:02:31.034 SYMLINK libspdk_ioat.so 00:02:31.295 LIB libspdk_vfio_user.a 00:02:31.295 SO libspdk_vfio_user.so.5.0 00:02:31.295 LIB libspdk_util.a 00:02:31.295 SYMLINK libspdk_vfio_user.so 00:02:31.295 SO libspdk_util.so.10.0 00:02:31.556 SYMLINK libspdk_util.so 00:02:31.556 LIB libspdk_trace_parser.a 00:02:31.556 SO libspdk_trace_parser.so.6.0 00:02:31.817 SYMLINK libspdk_trace_parser.so 00:02:31.817 CC lib/json/json_parse.o 00:02:31.817 CC lib/rdma_provider/common.o 00:02:31.817 CC lib/json/json_util.o 00:02:31.817 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:31.817 CC lib/json/json_write.o 00:02:31.817 CC lib/vmd/vmd.o 00:02:31.817 CC lib/vmd/led.o 00:02:31.817 CC lib/idxd/idxd.o 00:02:31.817 CC lib/idxd/idxd_user.o 00:02:31.817 CC lib/idxd/idxd_kernel.o 00:02:31.817 CC lib/conf/conf.o 00:02:31.817 CC lib/env_dpdk/env.o 00:02:31.817 CC lib/env_dpdk/memory.o 00:02:31.817 CC lib/rdma_utils/rdma_utils.o 00:02:31.817 CC lib/env_dpdk/pci.o 00:02:31.817 CC lib/env_dpdk/init.o 00:02:31.817 CC lib/env_dpdk/threads.o 00:02:31.817 CC lib/env_dpdk/pci_virtio.o 00:02:31.817 CC lib/env_dpdk/pci_ioat.o 00:02:31.817 CC lib/env_dpdk/pci_vmd.o 00:02:31.817 CC lib/env_dpdk/pci_idxd.o 00:02:31.817 CC lib/env_dpdk/pci_event.o 00:02:31.817 CC lib/env_dpdk/sigbus_handler.o 00:02:31.817 CC lib/env_dpdk/pci_dpdk.o 00:02:31.817 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:31.817 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:32.078 LIB libspdk_rdma_provider.a 00:02:32.078 LIB libspdk_conf.a 00:02:32.078 SO libspdk_rdma_provider.so.6.0 00:02:32.078 LIB libspdk_json.a 00:02:32.078 SO libspdk_conf.so.6.0 00:02:32.078 SO libspdk_json.so.6.0 00:02:32.078 LIB libspdk_rdma_utils.a 00:02:32.078 SYMLINK libspdk_rdma_provider.so 00:02:32.078 SO libspdk_rdma_utils.so.1.0 00:02:32.078 SYMLINK libspdk_conf.so 00:02:32.078 SYMLINK libspdk_json.so 00:02:32.339 SYMLINK libspdk_rdma_utils.so 00:02:32.339 LIB libspdk_idxd.a 00:02:32.339 SO libspdk_idxd.so.12.1 00:02:32.339 LIB libspdk_vmd.a 00:02:32.339 SO libspdk_vmd.so.6.0 00:02:32.339 SYMLINK libspdk_idxd.so 00:02:32.600 SYMLINK libspdk_vmd.so 00:02:32.600 CC lib/jsonrpc/jsonrpc_server.o 00:02:32.600 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:32.600 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:02:32.600 CC lib/jsonrpc/jsonrpc_client.o 00:02:32.861 LIB libspdk_jsonrpc.a 00:02:32.861 SO libspdk_jsonrpc.so.6.0 00:02:32.861 SYMLINK libspdk_jsonrpc.so 00:02:33.121 LIB libspdk_env_dpdk.a 00:02:33.121 SO libspdk_env_dpdk.so.15.1 00:02:33.121 SYMLINK libspdk_env_dpdk.so 00:02:33.382 CC lib/rpc/rpc.o 00:02:33.382 LIB libspdk_rpc.a 00:02:33.644 SO libspdk_rpc.so.6.0 00:02:33.644 SYMLINK libspdk_rpc.so 00:02:33.904 CC lib/keyring/keyring.o 00:02:33.904 CC lib/keyring/keyring_rpc.o 00:02:33.904 CC lib/notify/notify.o 00:02:33.904 CC lib/notify/notify_rpc.o 00:02:33.904 CC lib/trace/trace.o 00:02:33.904 CC lib/trace/trace_flags.o 00:02:33.904 CC lib/trace/trace_rpc.o 00:02:34.165 LIB libspdk_notify.a 00:02:34.165 SO libspdk_notify.so.6.0 00:02:34.165 LIB libspdk_keyring.a 00:02:34.165 LIB libspdk_trace.a 00:02:34.165 SO libspdk_keyring.so.2.0 00:02:34.165 SYMLINK libspdk_notify.so 00:02:34.165 SO libspdk_trace.so.11.0 00:02:34.426 SYMLINK libspdk_keyring.so 00:02:34.426 SYMLINK libspdk_trace.so 00:02:34.686 CC lib/thread/thread.o 00:02:34.686 CC lib/thread/iobuf.o 00:02:34.686 CC lib/sock/sock.o 00:02:34.686 CC lib/sock/sock_rpc.o 00:02:34.947 LIB libspdk_sock.a 00:02:35.208 SO libspdk_sock.so.10.0 00:02:35.208 SYMLINK libspdk_sock.so 00:02:35.469 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:35.469 CC lib/nvme/nvme_ctrlr.o 00:02:35.469 CC lib/nvme/nvme_ns.o 00:02:35.469 CC lib/nvme/nvme_fabric.o 00:02:35.469 CC lib/nvme/nvme_ns_cmd.o 00:02:35.469 CC lib/nvme/nvme_pcie_common.o 00:02:35.469 CC lib/nvme/nvme_pcie.o 00:02:35.469 CC lib/nvme/nvme_qpair.o 00:02:35.469 CC lib/nvme/nvme.o 00:02:35.469 CC lib/nvme/nvme_quirks.o 00:02:35.469 CC lib/nvme/nvme_transport.o 00:02:35.469 CC lib/nvme/nvme_discovery.o 00:02:35.469 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:35.469 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:35.469 CC lib/nvme/nvme_tcp.o 00:02:35.469 CC lib/nvme/nvme_opal.o 00:02:35.469 CC lib/nvme/nvme_poll_group.o 00:02:35.469 CC lib/nvme/nvme_io_msg.o 00:02:35.469 CC lib/nvme/nvme_zns.o 00:02:35.469 CC lib/nvme/nvme_stubs.o 00:02:35.469 CC lib/nvme/nvme_auth.o 00:02:35.469 CC lib/nvme/nvme_cuse.o 00:02:35.469 CC lib/nvme/nvme_vfio_user.o 00:02:35.469 CC lib/nvme/nvme_rdma.o 00:02:36.040 LIB libspdk_thread.a 00:02:36.040 SO libspdk_thread.so.11.0 00:02:36.040 SYMLINK libspdk_thread.so 00:02:36.301 CC lib/blob/request.o 00:02:36.301 CC lib/vfu_tgt/tgt_endpoint.o 00:02:36.301 CC lib/blob/blobstore.o 00:02:36.301 CC lib/accel/accel.o 00:02:36.301 CC lib/blob/zeroes.o 00:02:36.301 CC lib/blob/blob_bs_dev.o 00:02:36.301 CC lib/vfu_tgt/tgt_rpc.o 00:02:36.301 CC lib/accel/accel_rpc.o 00:02:36.301 CC lib/accel/accel_sw.o 00:02:36.301 CC lib/init/json_config.o 00:02:36.301 CC lib/init/subsystem.o 00:02:36.301 CC lib/init/subsystem_rpc.o 00:02:36.301 CC lib/init/rpc.o 00:02:36.301 CC lib/fsdev/fsdev.o 00:02:36.301 CC lib/fsdev/fsdev_rpc.o 00:02:36.301 CC lib/fsdev/fsdev_io.o 00:02:36.301 CC lib/virtio/virtio.o 00:02:36.301 CC lib/virtio/virtio_vhost_user.o 00:02:36.301 CC lib/virtio/virtio_vfio_user.o 00:02:36.301 CC lib/virtio/virtio_pci.o 00:02:36.562 LIB libspdk_init.a 00:02:36.562 SO libspdk_init.so.6.0 00:02:36.562 LIB libspdk_vfu_tgt.a 00:02:36.823 LIB libspdk_virtio.a 00:02:36.823 SO libspdk_vfu_tgt.so.3.0 00:02:36.823 SYMLINK libspdk_init.so 00:02:36.823 SO libspdk_virtio.so.7.0 00:02:36.823 SYMLINK libspdk_vfu_tgt.so 00:02:36.823 SYMLINK libspdk_virtio.so 00:02:37.085 LIB libspdk_fsdev.a 00:02:37.085 SO libspdk_fsdev.so.2.0 00:02:37.085 CC lib/event/app.o 00:02:37.085 CC 
lib/event/reactor.o 00:02:37.085 CC lib/event/log_rpc.o 00:02:37.085 CC lib/event/app_rpc.o 00:02:37.085 CC lib/event/scheduler_static.o 00:02:37.085 SYMLINK libspdk_fsdev.so 00:02:37.347 LIB libspdk_accel.a 00:02:37.347 SO libspdk_accel.so.16.0 00:02:37.347 LIB libspdk_nvme.a 00:02:37.347 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:37.347 SYMLINK libspdk_accel.so 00:02:37.608 LIB libspdk_event.a 00:02:37.608 SO libspdk_nvme.so.14.1 00:02:37.608 SO libspdk_event.so.14.0 00:02:37.608 SYMLINK libspdk_event.so 00:02:37.868 SYMLINK libspdk_nvme.so 00:02:37.868 CC lib/bdev/bdev.o 00:02:37.868 CC lib/bdev/bdev_rpc.o 00:02:37.868 CC lib/bdev/bdev_zone.o 00:02:37.868 CC lib/bdev/part.o 00:02:37.868 CC lib/bdev/scsi_nvme.o 00:02:38.130 LIB libspdk_fuse_dispatcher.a 00:02:38.130 SO libspdk_fuse_dispatcher.so.1.0 00:02:38.130 SYMLINK libspdk_fuse_dispatcher.so 00:02:39.074 LIB libspdk_blob.a 00:02:39.074 SO libspdk_blob.so.11.0 00:02:39.074 SYMLINK libspdk_blob.so 00:02:39.645 CC lib/blobfs/blobfs.o 00:02:39.646 CC lib/blobfs/tree.o 00:02:39.646 CC lib/lvol/lvol.o 00:02:40.217 LIB libspdk_bdev.a 00:02:40.217 SO libspdk_bdev.so.17.0 00:02:40.217 LIB libspdk_blobfs.a 00:02:40.217 SO libspdk_blobfs.so.10.0 00:02:40.217 SYMLINK libspdk_bdev.so 00:02:40.217 LIB libspdk_lvol.a 00:02:40.217 SYMLINK libspdk_blobfs.so 00:02:40.478 SO libspdk_lvol.so.10.0 00:02:40.478 SYMLINK libspdk_lvol.so 00:02:40.737 CC lib/scsi/dev.o 00:02:40.737 CC lib/scsi/lun.o 00:02:40.737 CC lib/nbd/nbd.o 00:02:40.737 CC lib/scsi/port.o 00:02:40.737 CC lib/ftl/ftl_core.o 00:02:40.738 CC lib/nbd/nbd_rpc.o 00:02:40.738 CC lib/scsi/scsi.o 00:02:40.738 CC lib/ftl/ftl_init.o 00:02:40.738 CC lib/scsi/scsi_bdev.o 00:02:40.738 CC lib/ftl/ftl_layout.o 00:02:40.738 CC lib/scsi/scsi_pr.o 00:02:40.738 CC lib/scsi/scsi_rpc.o 00:02:40.738 CC lib/ftl/ftl_debug.o 00:02:40.738 CC lib/scsi/task.o 00:02:40.738 CC lib/ftl/ftl_io.o 00:02:40.738 CC lib/ftl/ftl_sb.o 00:02:40.738 CC lib/ftl/ftl_l2p.o 00:02:40.738 CC lib/ftl/ftl_l2p_flat.o 00:02:40.738 CC lib/ublk/ublk.o 00:02:40.738 CC lib/ftl/ftl_nv_cache.o 00:02:40.738 CC lib/ublk/ublk_rpc.o 00:02:40.738 CC lib/nvmf/ctrlr.o 00:02:40.738 CC lib/ftl/ftl_band.o 00:02:40.738 CC lib/nvmf/ctrlr_discovery.o 00:02:40.738 CC lib/ftl/ftl_band_ops.o 00:02:40.738 CC lib/ftl/ftl_rq.o 00:02:40.738 CC lib/nvmf/ctrlr_bdev.o 00:02:40.738 CC lib/ftl/ftl_writer.o 00:02:40.738 CC lib/nvmf/subsystem.o 00:02:40.738 CC lib/nvmf/nvmf.o 00:02:40.738 CC lib/ftl/ftl_reloc.o 00:02:40.738 CC lib/nvmf/nvmf_rpc.o 00:02:40.738 CC lib/ftl/ftl_l2p_cache.o 00:02:40.738 CC lib/ftl/ftl_p2l.o 00:02:40.738 CC lib/nvmf/transport.o 00:02:40.738 CC lib/nvmf/tcp.o 00:02:40.738 CC lib/ftl/ftl_p2l_log.o 00:02:40.738 CC lib/nvmf/stubs.o 00:02:40.738 CC lib/ftl/mngt/ftl_mngt.o 00:02:40.738 CC lib/nvmf/mdns_server.o 00:02:40.738 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:40.738 CC lib/nvmf/vfio_user.o 00:02:40.738 CC lib/nvmf/rdma.o 00:02:40.738 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:40.738 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:40.738 CC lib/nvmf/auth.o 00:02:40.738 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:40.738 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:40.738 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:40.738 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:40.738 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:40.738 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:40.738 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:40.738 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:40.738 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:40.738 CC lib/ftl/utils/ftl_conf.o 00:02:40.738 CC lib/ftl/utils/ftl_mempool.o 
00:02:40.738 CC lib/ftl/utils/ftl_md.o 00:02:40.738 CC lib/ftl/utils/ftl_bitmap.o 00:02:40.738 CC lib/ftl/utils/ftl_property.o 00:02:40.738 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:40.738 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:40.738 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:40.738 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:40.738 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:40.738 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:40.738 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:40.738 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:40.738 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:40.738 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:40.738 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:40.738 CC lib/ftl/base/ftl_base_dev.o 00:02:40.738 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:40.738 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:40.738 CC lib/ftl/base/ftl_base_bdev.o 00:02:40.738 CC lib/ftl/ftl_trace.o 00:02:41.309 LIB libspdk_nbd.a 00:02:41.309 SO libspdk_nbd.so.7.0 00:02:41.309 SYMLINK libspdk_nbd.so 00:02:41.309 LIB libspdk_scsi.a 00:02:41.309 SO libspdk_scsi.so.9.0 00:02:41.309 LIB libspdk_ublk.a 00:02:41.309 SYMLINK libspdk_scsi.so 00:02:41.309 SO libspdk_ublk.so.3.0 00:02:41.571 SYMLINK libspdk_ublk.so 00:02:41.571 LIB libspdk_ftl.a 00:02:41.833 CC lib/iscsi/conn.o 00:02:41.833 CC lib/vhost/vhost.o 00:02:41.833 CC lib/iscsi/iscsi.o 00:02:41.833 CC lib/iscsi/init_grp.o 00:02:41.833 CC lib/iscsi/param.o 00:02:41.833 CC lib/vhost/vhost_rpc.o 00:02:41.833 CC lib/vhost/vhost_scsi.o 00:02:41.833 CC lib/vhost/rte_vhost_user.o 00:02:41.834 CC lib/vhost/vhost_blk.o 00:02:41.834 CC lib/iscsi/portal_grp.o 00:02:41.834 CC lib/iscsi/tgt_node.o 00:02:41.834 CC lib/iscsi/iscsi_subsystem.o 00:02:41.834 CC lib/iscsi/iscsi_rpc.o 00:02:41.834 CC lib/iscsi/task.o 00:02:41.834 SO libspdk_ftl.so.9.0 00:02:42.095 SYMLINK libspdk_ftl.so 00:02:42.668 LIB libspdk_nvmf.a 00:02:42.668 SO libspdk_nvmf.so.20.0 00:02:42.668 LIB libspdk_vhost.a 00:02:42.668 SO libspdk_vhost.so.8.0 00:02:42.668 SYMLINK libspdk_nvmf.so 00:02:42.929 SYMLINK libspdk_vhost.so 00:02:42.929 LIB libspdk_iscsi.a 00:02:42.929 SO libspdk_iscsi.so.8.0 00:02:43.190 SYMLINK libspdk_iscsi.so 00:02:43.761 CC module/env_dpdk/env_dpdk_rpc.o 00:02:43.761 CC module/vfu_device/vfu_virtio.o 00:02:43.761 CC module/vfu_device/vfu_virtio_blk.o 00:02:43.761 CC module/vfu_device/vfu_virtio_scsi.o 00:02:43.761 CC module/vfu_device/vfu_virtio_rpc.o 00:02:43.761 CC module/vfu_device/vfu_virtio_fs.o 00:02:43.761 LIB libspdk_env_dpdk_rpc.a 00:02:43.761 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:43.761 CC module/accel/error/accel_error.o 00:02:43.761 CC module/accel/error/accel_error_rpc.o 00:02:43.761 CC module/scheduler/gscheduler/gscheduler.o 00:02:43.761 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:43.762 CC module/accel/iaa/accel_iaa.o 00:02:43.762 CC module/accel/iaa/accel_iaa_rpc.o 00:02:43.762 CC module/fsdev/aio/fsdev_aio.o 00:02:43.762 CC module/keyring/file/keyring.o 00:02:43.762 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:43.762 CC module/blob/bdev/blob_bdev.o 00:02:43.762 CC module/keyring/file/keyring_rpc.o 00:02:43.762 CC module/fsdev/aio/linux_aio_mgr.o 00:02:43.762 CC module/accel/ioat/accel_ioat.o 00:02:43.762 CC module/accel/dsa/accel_dsa.o 00:02:43.762 CC module/keyring/linux/keyring.o 00:02:43.762 CC module/accel/ioat/accel_ioat_rpc.o 00:02:43.762 CC module/sock/posix/posix.o 00:02:43.762 CC module/accel/dsa/accel_dsa_rpc.o 00:02:43.762 CC module/keyring/linux/keyring_rpc.o 00:02:43.762 SO libspdk_env_dpdk_rpc.so.6.0 00:02:44.022 SYMLINK 
libspdk_env_dpdk_rpc.so 00:02:44.022 LIB libspdk_keyring_file.a 00:02:44.022 LIB libspdk_keyring_linux.a 00:02:44.022 LIB libspdk_scheduler_gscheduler.a 00:02:44.022 LIB libspdk_accel_error.a 00:02:44.022 LIB libspdk_scheduler_dpdk_governor.a 00:02:44.022 LIB libspdk_scheduler_dynamic.a 00:02:44.022 SO libspdk_keyring_file.so.2.0 00:02:44.022 LIB libspdk_accel_ioat.a 00:02:44.022 SO libspdk_keyring_linux.so.1.0 00:02:44.022 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:44.022 SO libspdk_scheduler_gscheduler.so.4.0 00:02:44.022 SO libspdk_accel_error.so.2.0 00:02:44.022 SO libspdk_scheduler_dynamic.so.4.0 00:02:44.022 LIB libspdk_accel_iaa.a 00:02:44.022 SO libspdk_accel_ioat.so.6.0 00:02:44.022 SYMLINK libspdk_keyring_file.so 00:02:44.022 SO libspdk_accel_iaa.so.3.0 00:02:44.022 SYMLINK libspdk_keyring_linux.so 00:02:44.022 SYMLINK libspdk_scheduler_gscheduler.so 00:02:44.022 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:44.022 SYMLINK libspdk_accel_error.so 00:02:44.022 LIB libspdk_blob_bdev.a 00:02:44.022 SYMLINK libspdk_scheduler_dynamic.so 00:02:44.022 LIB libspdk_accel_dsa.a 00:02:44.022 SYMLINK libspdk_accel_ioat.so 00:02:44.022 SO libspdk_accel_dsa.so.5.0 00:02:44.283 SO libspdk_blob_bdev.so.11.0 00:02:44.283 SYMLINK libspdk_accel_iaa.so 00:02:44.283 LIB libspdk_vfu_device.a 00:02:44.283 SYMLINK libspdk_blob_bdev.so 00:02:44.283 SYMLINK libspdk_accel_dsa.so 00:02:44.283 SO libspdk_vfu_device.so.3.0 00:02:44.283 SYMLINK libspdk_vfu_device.so 00:02:44.283 LIB libspdk_fsdev_aio.a 00:02:44.545 SO libspdk_fsdev_aio.so.1.0 00:02:44.545 LIB libspdk_sock_posix.a 00:02:44.545 SO libspdk_sock_posix.so.6.0 00:02:44.545 SYMLINK libspdk_fsdev_aio.so 00:02:44.545 SYMLINK libspdk_sock_posix.so 00:02:44.807 CC module/bdev/nvme/bdev_nvme.o 00:02:44.807 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:44.807 CC module/bdev/nvme/bdev_mdns_client.o 00:02:44.807 CC module/bdev/nvme/nvme_rpc.o 00:02:44.807 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:44.807 CC module/bdev/nvme/vbdev_opal.o 00:02:44.807 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:44.807 CC module/bdev/gpt/gpt.o 00:02:44.807 CC module/bdev/gpt/vbdev_gpt.o 00:02:44.807 CC module/bdev/delay/vbdev_delay.o 00:02:44.807 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:44.807 CC module/bdev/split/vbdev_split.o 00:02:44.807 CC module/bdev/error/vbdev_error.o 00:02:44.807 CC module/bdev/error/vbdev_error_rpc.o 00:02:44.807 CC module/bdev/split/vbdev_split_rpc.o 00:02:44.807 CC module/blobfs/bdev/blobfs_bdev.o 00:02:44.807 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:44.807 CC module/bdev/malloc/bdev_malloc.o 00:02:44.807 CC module/bdev/passthru/vbdev_passthru.o 00:02:44.807 CC module/bdev/iscsi/bdev_iscsi.o 00:02:44.807 CC module/bdev/ftl/bdev_ftl.o 00:02:44.807 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:44.807 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:44.807 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:44.807 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:44.807 CC module/bdev/lvol/vbdev_lvol.o 00:02:44.807 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:44.807 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:44.807 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:44.807 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:44.807 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:44.807 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:44.807 CC module/bdev/aio/bdev_aio.o 00:02:44.807 CC module/bdev/aio/bdev_aio_rpc.o 00:02:44.807 CC module/bdev/raid/bdev_raid_rpc.o 00:02:44.807 CC module/bdev/raid/bdev_raid.o 00:02:44.807 CC module/bdev/raid/bdev_raid_sb.o 
00:02:44.807 CC module/bdev/raid/raid0.o 00:02:44.807 CC module/bdev/raid/raid1.o 00:02:44.807 CC module/bdev/null/bdev_null.o 00:02:44.807 CC module/bdev/null/bdev_null_rpc.o 00:02:44.807 CC module/bdev/raid/concat.o 00:02:45.068 LIB libspdk_blobfs_bdev.a 00:02:45.068 SO libspdk_blobfs_bdev.so.6.0 00:02:45.068 LIB libspdk_bdev_split.a 00:02:45.068 LIB libspdk_bdev_gpt.a 00:02:45.068 LIB libspdk_bdev_error.a 00:02:45.068 SO libspdk_bdev_split.so.6.0 00:02:45.068 SO libspdk_bdev_gpt.so.6.0 00:02:45.068 SYMLINK libspdk_blobfs_bdev.so 00:02:45.068 LIB libspdk_bdev_null.a 00:02:45.068 LIB libspdk_bdev_passthru.a 00:02:45.068 SO libspdk_bdev_error.so.6.0 00:02:45.068 LIB libspdk_bdev_ftl.a 00:02:45.068 SO libspdk_bdev_passthru.so.6.0 00:02:45.068 SO libspdk_bdev_null.so.6.0 00:02:45.068 SYMLINK libspdk_bdev_gpt.so 00:02:45.068 SYMLINK libspdk_bdev_split.so 00:02:45.068 LIB libspdk_bdev_delay.a 00:02:45.068 SO libspdk_bdev_ftl.so.6.0 00:02:45.068 LIB libspdk_bdev_zone_block.a 00:02:45.068 LIB libspdk_bdev_aio.a 00:02:45.068 LIB libspdk_bdev_malloc.a 00:02:45.329 SO libspdk_bdev_delay.so.6.0 00:02:45.329 LIB libspdk_bdev_iscsi.a 00:02:45.329 SYMLINK libspdk_bdev_error.so 00:02:45.329 SO libspdk_bdev_zone_block.so.6.0 00:02:45.329 SO libspdk_bdev_aio.so.6.0 00:02:45.329 SYMLINK libspdk_bdev_null.so 00:02:45.329 SYMLINK libspdk_bdev_passthru.so 00:02:45.329 SO libspdk_bdev_malloc.so.6.0 00:02:45.329 SO libspdk_bdev_iscsi.so.6.0 00:02:45.329 SYMLINK libspdk_bdev_ftl.so 00:02:45.329 SYMLINK libspdk_bdev_delay.so 00:02:45.329 SYMLINK libspdk_bdev_zone_block.so 00:02:45.329 LIB libspdk_bdev_lvol.a 00:02:45.329 SYMLINK libspdk_bdev_aio.so 00:02:45.329 SYMLINK libspdk_bdev_malloc.so 00:02:45.329 LIB libspdk_bdev_virtio.a 00:02:45.329 SYMLINK libspdk_bdev_iscsi.so 00:02:45.329 SO libspdk_bdev_lvol.so.6.0 00:02:45.329 SO libspdk_bdev_virtio.so.6.0 00:02:45.329 SYMLINK libspdk_bdev_lvol.so 00:02:45.329 SYMLINK libspdk_bdev_virtio.so 00:02:45.902 LIB libspdk_bdev_raid.a 00:02:45.902 SO libspdk_bdev_raid.so.6.0 00:02:45.902 SYMLINK libspdk_bdev_raid.so 00:02:47.288 LIB libspdk_bdev_nvme.a 00:02:47.288 SO libspdk_bdev_nvme.so.7.1 00:02:47.288 SYMLINK libspdk_bdev_nvme.so 00:02:47.860 CC module/event/subsystems/sock/sock.o 00:02:47.860 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:47.860 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:47.860 CC module/event/subsystems/iobuf/iobuf.o 00:02:47.860 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:47.860 CC module/event/subsystems/vmd/vmd.o 00:02:47.860 CC module/event/subsystems/keyring/keyring.o 00:02:47.860 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:47.860 CC module/event/subsystems/scheduler/scheduler.o 00:02:47.860 CC module/event/subsystems/fsdev/fsdev.o 00:02:48.122 LIB libspdk_event_keyring.a 00:02:48.122 LIB libspdk_event_sock.a 00:02:48.122 LIB libspdk_event_scheduler.a 00:02:48.122 LIB libspdk_event_vfu_tgt.a 00:02:48.122 LIB libspdk_event_vhost_blk.a 00:02:48.122 LIB libspdk_event_fsdev.a 00:02:48.122 LIB libspdk_event_vmd.a 00:02:48.122 SO libspdk_event_sock.so.5.0 00:02:48.122 SO libspdk_event_scheduler.so.4.0 00:02:48.122 SO libspdk_event_keyring.so.1.0 00:02:48.122 LIB libspdk_event_iobuf.a 00:02:48.122 SO libspdk_event_vhost_blk.so.3.0 00:02:48.122 SO libspdk_event_vfu_tgt.so.3.0 00:02:48.122 SO libspdk_event_fsdev.so.1.0 00:02:48.122 SO libspdk_event_vmd.so.6.0 00:02:48.122 SO libspdk_event_iobuf.so.3.0 00:02:48.122 SYMLINK libspdk_event_scheduler.so 00:02:48.122 SYMLINK libspdk_event_sock.so 00:02:48.122 SYMLINK 
libspdk_event_keyring.so 00:02:48.122 SYMLINK libspdk_event_vhost_blk.so 00:02:48.122 SYMLINK libspdk_event_fsdev.so 00:02:48.122 SYMLINK libspdk_event_vfu_tgt.so 00:02:48.122 SYMLINK libspdk_event_vmd.so 00:02:48.122 SYMLINK libspdk_event_iobuf.so 00:02:48.693 CC module/event/subsystems/accel/accel.o 00:02:48.693 LIB libspdk_event_accel.a 00:02:48.693 SO libspdk_event_accel.so.6.0 00:02:48.693 SYMLINK libspdk_event_accel.so 00:02:49.265 CC module/event/subsystems/bdev/bdev.o 00:02:49.265 LIB libspdk_event_bdev.a 00:02:49.265 SO libspdk_event_bdev.so.6.0 00:02:49.526 SYMLINK libspdk_event_bdev.so 00:02:49.787 CC module/event/subsystems/nbd/nbd.o 00:02:49.787 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:49.787 CC module/event/subsystems/scsi/scsi.o 00:02:49.787 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:49.787 CC module/event/subsystems/ublk/ublk.o 00:02:49.787 LIB libspdk_event_nbd.a 00:02:50.049 LIB libspdk_event_ublk.a 00:02:50.049 SO libspdk_event_nbd.so.6.0 00:02:50.049 LIB libspdk_event_scsi.a 00:02:50.049 SO libspdk_event_ublk.so.3.0 00:02:50.049 SO libspdk_event_scsi.so.6.0 00:02:50.049 SYMLINK libspdk_event_nbd.so 00:02:50.049 LIB libspdk_event_nvmf.a 00:02:50.049 SYMLINK libspdk_event_ublk.so 00:02:50.049 SYMLINK libspdk_event_scsi.so 00:02:50.049 SO libspdk_event_nvmf.so.6.0 00:02:50.049 SYMLINK libspdk_event_nvmf.so 00:02:50.310 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:50.310 CC module/event/subsystems/iscsi/iscsi.o 00:02:50.571 LIB libspdk_event_vhost_scsi.a 00:02:50.571 SO libspdk_event_vhost_scsi.so.3.0 00:02:50.571 LIB libspdk_event_iscsi.a 00:02:50.571 SO libspdk_event_iscsi.so.6.0 00:02:50.571 SYMLINK libspdk_event_vhost_scsi.so 00:02:50.831 SYMLINK libspdk_event_iscsi.so 00:02:50.831 SO libspdk.so.6.0 00:02:50.831 SYMLINK libspdk.so 00:02:51.404 CC app/spdk_nvme_perf/perf.o 00:02:51.404 CC app/trace_record/trace_record.o 00:02:51.404 CXX app/trace/trace.o 00:02:51.404 CC app/spdk_lspci/spdk_lspci.o 00:02:51.404 CC app/spdk_top/spdk_top.o 00:02:51.404 CC test/rpc_client/rpc_client_test.o 00:02:51.404 CC app/spdk_nvme_discover/discovery_aer.o 00:02:51.404 TEST_HEADER include/spdk/accel.h 00:02:51.404 CC app/spdk_nvme_identify/identify.o 00:02:51.404 TEST_HEADER include/spdk/assert.h 00:02:51.404 TEST_HEADER include/spdk/accel_module.h 00:02:51.404 TEST_HEADER include/spdk/barrier.h 00:02:51.404 TEST_HEADER include/spdk/bdev.h 00:02:51.404 TEST_HEADER include/spdk/bdev_module.h 00:02:51.404 TEST_HEADER include/spdk/base64.h 00:02:51.404 TEST_HEADER include/spdk/bdev_zone.h 00:02:51.404 TEST_HEADER include/spdk/bit_array.h 00:02:51.404 CC app/spdk_dd/spdk_dd.o 00:02:51.404 TEST_HEADER include/spdk/bit_pool.h 00:02:51.404 TEST_HEADER include/spdk/blob_bdev.h 00:02:51.404 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:51.404 TEST_HEADER include/spdk/blobfs.h 00:02:51.404 TEST_HEADER include/spdk/blob.h 00:02:51.404 TEST_HEADER include/spdk/conf.h 00:02:51.404 TEST_HEADER include/spdk/config.h 00:02:51.404 TEST_HEADER include/spdk/cpuset.h 00:02:51.404 TEST_HEADER include/spdk/crc16.h 00:02:51.404 TEST_HEADER include/spdk/crc32.h 00:02:51.404 TEST_HEADER include/spdk/dif.h 00:02:51.404 TEST_HEADER include/spdk/crc64.h 00:02:51.404 TEST_HEADER include/spdk/endian.h 00:02:51.404 TEST_HEADER include/spdk/dma.h 00:02:51.404 CC app/nvmf_tgt/nvmf_main.o 00:02:51.404 TEST_HEADER include/spdk/env_dpdk.h 00:02:51.404 TEST_HEADER include/spdk/env.h 00:02:51.404 TEST_HEADER include/spdk/event.h 00:02:51.404 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:51.404 
TEST_HEADER include/spdk/fd_group.h 00:02:51.404 TEST_HEADER include/spdk/fd.h 00:02:51.404 TEST_HEADER include/spdk/file.h 00:02:51.404 TEST_HEADER include/spdk/fsdev.h 00:02:51.404 TEST_HEADER include/spdk/fsdev_module.h 00:02:51.404 TEST_HEADER include/spdk/ftl.h 00:02:51.404 TEST_HEADER include/spdk/gpt_spec.h 00:02:51.404 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:51.404 TEST_HEADER include/spdk/hexlify.h 00:02:51.404 TEST_HEADER include/spdk/histogram_data.h 00:02:51.404 TEST_HEADER include/spdk/idxd.h 00:02:51.404 CC app/iscsi_tgt/iscsi_tgt.o 00:02:51.404 TEST_HEADER include/spdk/idxd_spec.h 00:02:51.404 TEST_HEADER include/spdk/init.h 00:02:51.404 TEST_HEADER include/spdk/ioat.h 00:02:51.404 TEST_HEADER include/spdk/ioat_spec.h 00:02:51.404 TEST_HEADER include/spdk/iscsi_spec.h 00:02:51.404 TEST_HEADER include/spdk/json.h 00:02:51.404 TEST_HEADER include/spdk/jsonrpc.h 00:02:51.404 TEST_HEADER include/spdk/keyring.h 00:02:51.404 TEST_HEADER include/spdk/keyring_module.h 00:02:51.404 TEST_HEADER include/spdk/likely.h 00:02:51.404 TEST_HEADER include/spdk/log.h 00:02:51.404 TEST_HEADER include/spdk/lvol.h 00:02:51.404 TEST_HEADER include/spdk/md5.h 00:02:51.404 TEST_HEADER include/spdk/memory.h 00:02:51.404 TEST_HEADER include/spdk/mmio.h 00:02:51.404 TEST_HEADER include/spdk/nbd.h 00:02:51.404 TEST_HEADER include/spdk/net.h 00:02:51.404 TEST_HEADER include/spdk/nvme.h 00:02:51.404 TEST_HEADER include/spdk/notify.h 00:02:51.404 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:51.404 TEST_HEADER include/spdk/nvme_intel.h 00:02:51.404 CC app/spdk_tgt/spdk_tgt.o 00:02:51.404 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:51.404 TEST_HEADER include/spdk/nvme_spec.h 00:02:51.404 TEST_HEADER include/spdk/nvme_zns.h 00:02:51.404 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:51.404 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:51.404 TEST_HEADER include/spdk/nvmf.h 00:02:51.404 TEST_HEADER include/spdk/nvmf_spec.h 00:02:51.404 TEST_HEADER include/spdk/opal_spec.h 00:02:51.404 TEST_HEADER include/spdk/nvmf_transport.h 00:02:51.404 TEST_HEADER include/spdk/opal.h 00:02:51.404 TEST_HEADER include/spdk/pci_ids.h 00:02:51.404 TEST_HEADER include/spdk/pipe.h 00:02:51.404 TEST_HEADER include/spdk/reduce.h 00:02:51.404 TEST_HEADER include/spdk/queue.h 00:02:51.404 TEST_HEADER include/spdk/rpc.h 00:02:51.404 TEST_HEADER include/spdk/scheduler.h 00:02:51.404 TEST_HEADER include/spdk/scsi_spec.h 00:02:51.404 TEST_HEADER include/spdk/scsi.h 00:02:51.404 TEST_HEADER include/spdk/stdinc.h 00:02:51.404 TEST_HEADER include/spdk/sock.h 00:02:51.404 TEST_HEADER include/spdk/string.h 00:02:51.404 TEST_HEADER include/spdk/trace_parser.h 00:02:51.404 TEST_HEADER include/spdk/thread.h 00:02:51.404 TEST_HEADER include/spdk/trace.h 00:02:51.404 TEST_HEADER include/spdk/tree.h 00:02:51.404 TEST_HEADER include/spdk/ublk.h 00:02:51.404 TEST_HEADER include/spdk/util.h 00:02:51.404 TEST_HEADER include/spdk/uuid.h 00:02:51.404 TEST_HEADER include/spdk/version.h 00:02:51.404 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:51.404 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:51.404 TEST_HEADER include/spdk/vhost.h 00:02:51.404 TEST_HEADER include/spdk/vmd.h 00:02:51.404 TEST_HEADER include/spdk/xor.h 00:02:51.404 TEST_HEADER include/spdk/zipf.h 00:02:51.404 CXX test/cpp_headers/accel.o 00:02:51.404 CXX test/cpp_headers/accel_module.o 00:02:51.404 CXX test/cpp_headers/assert.o 00:02:51.404 CXX test/cpp_headers/barrier.o 00:02:51.404 CXX test/cpp_headers/base64.o 00:02:51.404 CXX test/cpp_headers/bdev.o 00:02:51.404 CXX 
test/cpp_headers/bdev_module.o 00:02:51.404 CXX test/cpp_headers/bdev_zone.o 00:02:51.404 CXX test/cpp_headers/bit_array.o 00:02:51.404 CXX test/cpp_headers/bit_pool.o 00:02:51.404 CXX test/cpp_headers/blob_bdev.o 00:02:51.404 CXX test/cpp_headers/blobfs_bdev.o 00:02:51.404 CXX test/cpp_headers/blobfs.o 00:02:51.404 CXX test/cpp_headers/blob.o 00:02:51.404 CXX test/cpp_headers/conf.o 00:02:51.404 CXX test/cpp_headers/config.o 00:02:51.404 CXX test/cpp_headers/cpuset.o 00:02:51.404 CXX test/cpp_headers/crc16.o 00:02:51.404 CXX test/cpp_headers/crc32.o 00:02:51.404 CXX test/cpp_headers/crc64.o 00:02:51.404 CXX test/cpp_headers/dif.o 00:02:51.404 CXX test/cpp_headers/dma.o 00:02:51.404 CXX test/cpp_headers/endian.o 00:02:51.404 CXX test/cpp_headers/env_dpdk.o 00:02:51.404 CXX test/cpp_headers/env.o 00:02:51.404 CXX test/cpp_headers/event.o 00:02:51.404 CXX test/cpp_headers/fd_group.o 00:02:51.404 CXX test/cpp_headers/fd.o 00:02:51.404 CXX test/cpp_headers/file.o 00:02:51.404 CXX test/cpp_headers/fsdev.o 00:02:51.404 CXX test/cpp_headers/ftl.o 00:02:51.404 CXX test/cpp_headers/fsdev_module.o 00:02:51.404 CXX test/cpp_headers/fuse_dispatcher.o 00:02:51.404 CXX test/cpp_headers/hexlify.o 00:02:51.404 CXX test/cpp_headers/gpt_spec.o 00:02:51.404 CXX test/cpp_headers/histogram_data.o 00:02:51.404 CXX test/cpp_headers/init.o 00:02:51.404 CXX test/cpp_headers/idxd.o 00:02:51.404 CXX test/cpp_headers/ioat.o 00:02:51.404 CXX test/cpp_headers/idxd_spec.o 00:02:51.404 CXX test/cpp_headers/iscsi_spec.o 00:02:51.404 CXX test/cpp_headers/ioat_spec.o 00:02:51.404 CXX test/cpp_headers/jsonrpc.o 00:02:51.404 CXX test/cpp_headers/keyring.o 00:02:51.404 CXX test/cpp_headers/json.o 00:02:51.404 CXX test/cpp_headers/keyring_module.o 00:02:51.404 CXX test/cpp_headers/likely.o 00:02:51.404 CC test/app/histogram_perf/histogram_perf.o 00:02:51.404 CXX test/cpp_headers/lvol.o 00:02:51.405 CXX test/cpp_headers/memory.o 00:02:51.405 CXX test/cpp_headers/log.o 00:02:51.405 CXX test/cpp_headers/md5.o 00:02:51.405 CXX test/cpp_headers/net.o 00:02:51.405 CXX test/cpp_headers/mmio.o 00:02:51.405 CXX test/cpp_headers/nbd.o 00:02:51.405 CXX test/cpp_headers/notify.o 00:02:51.405 CXX test/cpp_headers/nvme_intel.o 00:02:51.405 CXX test/cpp_headers/nvme_spec.o 00:02:51.405 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:51.405 CC examples/util/zipf/zipf.o 00:02:51.405 CXX test/cpp_headers/nvme_ocssd.o 00:02:51.405 CXX test/cpp_headers/nvme.o 00:02:51.405 CXX test/cpp_headers/nvmf_cmd.o 00:02:51.405 CXX test/cpp_headers/nvme_zns.o 00:02:51.405 CXX test/cpp_headers/nvmf_spec.o 00:02:51.405 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:51.405 CC test/app/jsoncat/jsoncat.o 00:02:51.405 LINK spdk_lspci 00:02:51.405 CXX test/cpp_headers/nvmf.o 00:02:51.405 CXX test/cpp_headers/nvmf_transport.o 00:02:51.405 CC examples/ioat/verify/verify.o 00:02:51.405 CC test/thread/poller_perf/poller_perf.o 00:02:51.405 CXX test/cpp_headers/opal.o 00:02:51.405 CXX test/cpp_headers/opal_spec.o 00:02:51.405 CC examples/ioat/perf/perf.o 00:02:51.405 CC app/fio/nvme/fio_plugin.o 00:02:51.405 CXX test/cpp_headers/pci_ids.o 00:02:51.405 CXX test/cpp_headers/pipe.o 00:02:51.405 CXX test/cpp_headers/rpc.o 00:02:51.405 CXX test/cpp_headers/scheduler.o 00:02:51.405 CC test/app/stub/stub.o 00:02:51.405 CXX test/cpp_headers/reduce.o 00:02:51.405 CC test/env/vtophys/vtophys.o 00:02:51.405 CXX test/cpp_headers/queue.o 00:02:51.405 CC test/env/pci/pci_ut.o 00:02:51.668 CXX test/cpp_headers/scsi.o 00:02:51.668 CXX test/cpp_headers/sock.o 00:02:51.668 CXX 
test/cpp_headers/scsi_spec.o 00:02:51.668 CXX test/cpp_headers/stdinc.o 00:02:51.668 CXX test/cpp_headers/thread.o 00:02:51.668 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:51.668 CXX test/cpp_headers/string.o 00:02:51.668 CC test/env/memory/memory_ut.o 00:02:51.668 CXX test/cpp_headers/trace.o 00:02:51.668 CXX test/cpp_headers/ublk.o 00:02:51.668 CXX test/cpp_headers/tree.o 00:02:51.668 CXX test/cpp_headers/trace_parser.o 00:02:51.668 CXX test/cpp_headers/util.o 00:02:51.668 CXX test/cpp_headers/uuid.o 00:02:51.668 CXX test/cpp_headers/version.o 00:02:51.668 CXX test/cpp_headers/vfio_user_spec.o 00:02:51.668 CXX test/cpp_headers/vfio_user_pci.o 00:02:51.668 CXX test/cpp_headers/vhost.o 00:02:51.668 CXX test/cpp_headers/vmd.o 00:02:51.668 CXX test/cpp_headers/xor.o 00:02:51.668 CXX test/cpp_headers/zipf.o 00:02:51.668 CC test/app/bdev_svc/bdev_svc.o 00:02:51.668 CC test/dma/test_dma/test_dma.o 00:02:51.668 CC app/fio/bdev/fio_plugin.o 00:02:51.668 LINK spdk_nvme_discover 00:02:51.668 LINK rpc_client_test 00:02:51.668 LINK nvmf_tgt 00:02:51.668 LINK interrupt_tgt 00:02:51.668 LINK iscsi_tgt 00:02:51.668 LINK spdk_trace_record 00:02:51.928 LINK spdk_tgt 00:02:51.928 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:51.928 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:51.928 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:51.928 CC test/env/mem_callbacks/mem_callbacks.o 00:02:51.928 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:51.928 LINK vtophys 00:02:51.928 LINK poller_perf 00:02:51.928 LINK histogram_perf 00:02:51.928 LINK env_dpdk_post_init 00:02:52.188 LINK zipf 00:02:52.188 LINK jsoncat 00:02:52.188 LINK spdk_dd 00:02:52.188 LINK spdk_trace 00:02:52.188 LINK bdev_svc 00:02:52.188 LINK stub 00:02:52.188 LINK ioat_perf 00:02:52.188 LINK verify 00:02:52.448 LINK spdk_nvme_perf 00:02:52.448 LINK test_dma 00:02:52.448 LINK pci_ut 00:02:52.448 LINK nvme_fuzz 00:02:52.448 CC app/vhost/vhost.o 00:02:52.448 CC examples/vmd/lsvmd/lsvmd.o 00:02:52.448 CC examples/vmd/led/led.o 00:02:52.448 CC examples/sock/hello_world/hello_sock.o 00:02:52.448 LINK vhost_fuzz 00:02:52.448 CC examples/thread/thread/thread_ex.o 00:02:52.448 CC examples/idxd/perf/perf.o 00:02:52.448 LINK spdk_nvme 00:02:52.448 CC test/event/reactor/reactor.o 00:02:52.708 CC test/event/reactor_perf/reactor_perf.o 00:02:52.708 CC test/event/event_perf/event_perf.o 00:02:52.708 CC test/event/app_repeat/app_repeat.o 00:02:52.708 LINK spdk_bdev 00:02:52.708 LINK spdk_nvme_identify 00:02:52.708 CC test/event/scheduler/scheduler.o 00:02:52.708 LINK spdk_top 00:02:52.708 LINK lsvmd 00:02:52.708 LINK led 00:02:52.708 LINK mem_callbacks 00:02:52.708 LINK reactor 00:02:52.708 LINK reactor_perf 00:02:52.708 LINK vhost 00:02:52.708 LINK hello_sock 00:02:52.708 LINK idxd_perf 00:02:52.708 LINK event_perf 00:02:52.708 LINK app_repeat 00:02:52.708 LINK thread 00:02:52.968 LINK scheduler 00:02:52.968 CC test/nvme/err_injection/err_injection.o 00:02:52.968 CC test/nvme/sgl/sgl.o 00:02:52.968 CC test/nvme/connect_stress/connect_stress.o 00:02:52.968 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:52.968 CC test/nvme/reserve/reserve.o 00:02:52.968 CC test/nvme/fused_ordering/fused_ordering.o 00:02:52.968 CC test/nvme/simple_copy/simple_copy.o 00:02:52.968 CC test/nvme/boot_partition/boot_partition.o 00:02:52.968 CC test/nvme/compliance/nvme_compliance.o 00:02:52.968 CC test/nvme/reset/reset.o 00:02:52.968 CC test/nvme/overhead/overhead.o 00:02:52.968 CC test/nvme/e2edp/nvme_dp.o 00:02:52.968 CC test/nvme/aer/aer.o 00:02:52.968 CC 
test/nvme/startup/startup.o 00:02:52.968 CC test/nvme/fdp/fdp.o 00:02:52.968 CC test/nvme/cuse/cuse.o 00:02:52.968 CC test/blobfs/mkfs/mkfs.o 00:02:52.968 CC test/accel/dif/dif.o 00:02:53.229 LINK memory_ut 00:02:53.229 CC test/lvol/esnap/esnap.o 00:02:53.229 LINK connect_stress 00:02:53.229 LINK err_injection 00:02:53.229 LINK boot_partition 00:02:53.229 LINK startup 00:02:53.229 LINK fused_ordering 00:02:53.229 LINK nvme_dp 00:02:53.229 LINK doorbell_aers 00:02:53.229 LINK reserve 00:02:53.229 LINK sgl 00:02:53.229 LINK simple_copy 00:02:53.229 CC examples/nvme/hello_world/hello_world.o 00:02:53.229 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:53.229 LINK overhead 00:02:53.229 CC examples/nvme/arbitration/arbitration.o 00:02:53.229 CC examples/nvme/abort/abort.o 00:02:53.229 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:53.229 CC examples/nvme/hotplug/hotplug.o 00:02:53.229 CC examples/nvme/reconnect/reconnect.o 00:02:53.229 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:53.229 LINK reset 00:02:53.229 LINK mkfs 00:02:53.229 LINK nvme_compliance 00:02:53.229 LINK aer 00:02:53.229 LINK fdp 00:02:53.490 CC examples/accel/perf/accel_perf.o 00:02:53.490 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:53.490 CC examples/blob/hello_world/hello_blob.o 00:02:53.490 LINK pmr_persistence 00:02:53.490 CC examples/blob/cli/blobcli.o 00:02:53.490 LINK cmb_copy 00:02:53.490 LINK hello_world 00:02:53.490 LINK hotplug 00:02:53.490 LINK iscsi_fuzz 00:02:53.490 LINK arbitration 00:02:53.490 LINK reconnect 00:02:53.490 LINK abort 00:02:53.752 LINK dif 00:02:53.752 LINK hello_blob 00:02:53.752 LINK nvme_manage 00:02:53.752 LINK hello_fsdev 00:02:53.752 LINK accel_perf 00:02:54.013 LINK blobcli 00:02:54.274 LINK cuse 00:02:54.274 CC test/bdev/bdevio/bdevio.o 00:02:54.274 CC examples/bdev/hello_world/hello_bdev.o 00:02:54.274 CC examples/bdev/bdevperf/bdevperf.o 00:02:54.536 LINK hello_bdev 00:02:54.842 LINK bdevio 00:02:55.188 LINK bdevperf 00:02:55.761 CC examples/nvmf/nvmf/nvmf.o 00:02:56.022 LINK nvmf 00:02:57.937 LINK esnap 00:02:57.937 00:02:57.937 real 0m54.374s 00:02:57.937 user 7m49.704s 00:02:57.937 sys 4m20.399s 00:02:57.937 04:14:11 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:57.937 04:14:11 make -- common/autotest_common.sh@10 -- $ set +x 00:02:57.937 ************************************ 00:02:57.937 END TEST make 00:02:57.937 ************************************ 00:02:57.937 04:14:11 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:57.937 04:14:11 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:57.937 04:14:11 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:57.937 04:14:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.937 04:14:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:57.937 04:14:11 -- pm/common@44 -- $ pid=2665774 00:02:57.937 04:14:11 -- pm/common@50 -- $ kill -TERM 2665774 00:02:57.937 04:14:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.937 04:14:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:57.937 04:14:11 -- pm/common@44 -- $ pid=2665775 00:02:57.937 04:14:11 -- pm/common@50 -- $ kill -TERM 2665775 00:02:57.937 04:14:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.937 04:14:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid 
]] 00:02:57.937 04:14:11 -- pm/common@44 -- $ pid=2665777 00:02:57.937 04:14:11 -- pm/common@50 -- $ kill -TERM 2665777 00:02:57.937 04:14:11 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:57.937 04:14:11 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:57.937 04:14:11 -- pm/common@44 -- $ pid=2665801 00:02:57.937 04:14:11 -- pm/common@50 -- $ sudo -E kill -TERM 2665801 00:02:57.937 04:14:11 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:57.937 04:14:11 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:58.199 04:14:11 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:02:58.199 04:14:11 -- common/autotest_common.sh@1691 -- # lcov --version 00:02:58.200 04:14:11 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:02:58.200 04:14:11 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:02:58.200 04:14:11 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:58.200 04:14:11 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:58.200 04:14:11 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:58.200 04:14:11 -- scripts/common.sh@336 -- # IFS=.-: 00:02:58.200 04:14:11 -- scripts/common.sh@336 -- # read -ra ver1 00:02:58.200 04:14:11 -- scripts/common.sh@337 -- # IFS=.-: 00:02:58.200 04:14:11 -- scripts/common.sh@337 -- # read -ra ver2 00:02:58.200 04:14:11 -- scripts/common.sh@338 -- # local 'op=<' 00:02:58.200 04:14:11 -- scripts/common.sh@340 -- # ver1_l=2 00:02:58.200 04:14:11 -- scripts/common.sh@341 -- # ver2_l=1 00:02:58.200 04:14:11 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:58.200 04:14:11 -- scripts/common.sh@344 -- # case "$op" in 00:02:58.200 04:14:11 -- scripts/common.sh@345 -- # : 1 00:02:58.200 04:14:11 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:58.200 04:14:11 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:58.200 04:14:11 -- scripts/common.sh@365 -- # decimal 1 00:02:58.200 04:14:11 -- scripts/common.sh@353 -- # local d=1 00:02:58.200 04:14:11 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:58.200 04:14:11 -- scripts/common.sh@355 -- # echo 1 00:02:58.200 04:14:11 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:58.200 04:14:11 -- scripts/common.sh@366 -- # decimal 2 00:02:58.200 04:14:11 -- scripts/common.sh@353 -- # local d=2 00:02:58.200 04:14:11 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:58.200 04:14:11 -- scripts/common.sh@355 -- # echo 2 00:02:58.200 04:14:11 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:58.200 04:14:11 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:58.200 04:14:11 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:58.200 04:14:11 -- scripts/common.sh@368 -- # return 0 00:02:58.200 04:14:11 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:58.200 04:14:11 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:02:58.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:58.200 --rc genhtml_branch_coverage=1 00:02:58.200 --rc genhtml_function_coverage=1 00:02:58.200 --rc genhtml_legend=1 00:02:58.200 --rc geninfo_all_blocks=1 00:02:58.200 --rc geninfo_unexecuted_blocks=1 00:02:58.200 00:02:58.200 ' 00:02:58.200 04:14:11 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:02:58.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:58.200 --rc genhtml_branch_coverage=1 00:02:58.200 --rc genhtml_function_coverage=1 00:02:58.200 --rc genhtml_legend=1 00:02:58.200 --rc geninfo_all_blocks=1 00:02:58.200 --rc geninfo_unexecuted_blocks=1 00:02:58.200 00:02:58.200 ' 00:02:58.200 04:14:11 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:02:58.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:58.200 --rc genhtml_branch_coverage=1 00:02:58.200 --rc genhtml_function_coverage=1 00:02:58.200 --rc genhtml_legend=1 00:02:58.200 --rc geninfo_all_blocks=1 00:02:58.200 --rc geninfo_unexecuted_blocks=1 00:02:58.200 00:02:58.200 ' 00:02:58.200 04:14:11 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:02:58.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:58.200 --rc genhtml_branch_coverage=1 00:02:58.200 --rc genhtml_function_coverage=1 00:02:58.200 --rc genhtml_legend=1 00:02:58.200 --rc geninfo_all_blocks=1 00:02:58.200 --rc geninfo_unexecuted_blocks=1 00:02:58.200 00:02:58.200 ' 00:02:58.200 04:14:11 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:58.200 04:14:11 -- nvmf/common.sh@7 -- # uname -s 00:02:58.200 04:14:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:58.200 04:14:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:58.200 04:14:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:58.200 04:14:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:58.200 04:14:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:58.200 04:14:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:58.200 04:14:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:58.200 04:14:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:58.200 04:14:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:58.200 04:14:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:58.200 04:14:11 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:58.200 04:14:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:02:58.200 04:14:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:58.200 04:14:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:58.200 04:14:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:58.200 04:14:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:58.200 04:14:11 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:58.200 04:14:11 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:58.200 04:14:11 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:58.200 04:14:11 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:58.200 04:14:11 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:58.200 04:14:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.200 04:14:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.200 04:14:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.200 04:14:11 -- paths/export.sh@5 -- # export PATH 00:02:58.200 04:14:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:58.200 04:14:11 -- nvmf/common.sh@51 -- # : 0 00:02:58.200 04:14:11 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:02:58.200 04:14:11 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:02:58.200 04:14:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:58.200 04:14:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:58.200 04:14:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:58.200 04:14:11 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:02:58.200 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:02:58.200 04:14:11 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:02:58.200 04:14:11 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:02:58.200 04:14:11 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:02:58.200 04:14:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:58.200 04:14:11 -- spdk/autotest.sh@32 -- # uname -s 00:02:58.200 04:14:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:58.200 04:14:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:58.200 04:14:11 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
00:02:58.200 04:14:11 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:58.200 04:14:11 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:58.201 04:14:11 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:58.201 04:14:11 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:58.201 04:14:11 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:58.201 04:14:11 -- spdk/autotest.sh@48 -- # udevadm_pid=2731256 00:02:58.201 04:14:11 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:58.201 04:14:11 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:58.201 04:14:11 -- pm/common@17 -- # local monitor 00:02:58.201 04:14:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.201 04:14:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.201 04:14:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.201 04:14:11 -- pm/common@21 -- # date +%s 00:02:58.201 04:14:11 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:58.201 04:14:11 -- pm/common@21 -- # date +%s 00:02:58.201 04:14:11 -- pm/common@21 -- # date +%s 00:02:58.201 04:14:11 -- pm/common@25 -- # sleep 1 00:02:58.201 04:14:11 -- pm/common@21 -- # date +%s 00:02:58.201 04:14:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730776451 00:02:58.201 04:14:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730776451 00:02:58.201 04:14:11 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730776451 00:02:58.201 04:14:11 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1730776451 00:02:58.201 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730776451_collect-cpu-temp.pm.log 00:02:58.201 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730776451_collect-vmstat.pm.log 00:02:58.201 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730776451_collect-cpu-load.pm.log 00:02:58.201 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1730776451_collect-bmc-pm.bmc.pm.log 00:02:59.144 04:14:12 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:59.144 04:14:12 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:59.144 04:14:12 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:59.144 04:14:12 -- common/autotest_common.sh@10 -- # set +x 00:02:59.144 04:14:12 -- spdk/autotest.sh@59 -- # create_test_list 00:02:59.144 04:14:12 -- common/autotest_common.sh@750 -- # xtrace_disable 00:02:59.144 04:14:12 -- common/autotest_common.sh@10 -- # set +x 00:02:59.405 04:14:12 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:59.405 04:14:12 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:59.405 04:14:12 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:59.405 04:14:12 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:59.405 04:14:12 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:59.405 04:14:12 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:59.405 04:14:12 -- common/autotest_common.sh@1455 -- # uname 00:02:59.405 04:14:12 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:59.405 04:14:12 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:59.405 04:14:12 -- common/autotest_common.sh@1475 -- # uname 00:02:59.405 04:14:12 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:59.405 04:14:12 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:59.405 04:14:12 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:59.405 lcov: LCOV version 1.15 00:02:59.405 04:14:12 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:21.378 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:21.378 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:29.514 04:14:42 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:29.514 04:14:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:29.514 04:14:42 -- common/autotest_common.sh@10 -- # set +x 00:03:29.514 04:14:42 -- spdk/autotest.sh@78 -- # rm -f 00:03:29.514 04:14:42 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.814 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:03:32.814 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:03:32.814 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:03:32.814 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:03:32.814 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:03:32.814 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:03:32.814 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:03:32.814 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:03:32.814 0000:65:00.0 (144d a80a): Already using the nvme driver 00:03:32.814 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:03:32.814 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:03:32.814 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:03:32.814 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:03:33.074 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:03:33.074 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:03:33.074 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:03:33.074 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:03:33.334 04:14:46 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:03:33.334 04:14:46 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:03:33.334 04:14:46 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:03:33.334 04:14:46 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:03:33.334 04:14:46 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:03:33.334 04:14:46 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:03:33.335 04:14:46 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:03:33.335 04:14:46 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:33.335 04:14:46 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:03:33.335 04:14:46 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:33.335 04:14:46 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:33.335 04:14:46 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:33.335 04:14:46 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:33.335 04:14:46 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:33.335 04:14:46 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:33.335 No valid GPT data, bailing 00:03:33.335 04:14:46 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:33.335 04:14:46 -- scripts/common.sh@394 -- # pt= 00:03:33.335 04:14:46 -- scripts/common.sh@395 -- # return 1 00:03:33.335 04:14:46 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:33.335 1+0 records in 00:03:33.335 1+0 records out 00:03:33.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00155786 s, 673 MB/s 00:03:33.335 04:14:46 -- spdk/autotest.sh@105 -- # sync 00:03:33.335 04:14:46 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:33.335 04:14:46 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:33.335 04:14:46 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:43.332 04:14:55 -- spdk/autotest.sh@111 -- # uname -s 00:03:43.332 04:14:55 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:43.332 04:14:55 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:43.332 04:14:55 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:45.244 Hugepages 00:03:45.244 node hugesize free / total 00:03:45.244 node0 1048576kB 0 / 0 00:03:45.244 node0 2048kB 0 / 0 00:03:45.244 node1 1048576kB 0 / 0 00:03:45.244 node1 2048kB 0 / 0 00:03:45.244 00:03:45.244 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:45.244 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:03:45.244 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:03:45.244 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:03:45.244 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:03:45.244 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:03:45.244 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:03:45.244 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:03:45.244 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:03:45.244 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:03:45.244 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:03:45.244 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:03:45.245 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:03:45.245 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:03:45.245 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:03:45.245 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:03:45.245 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:03:45.245 I/OAT 0000:80:01.7 8086 
0b00 1 ioatdma - - 00:03:45.245 04:14:58 -- spdk/autotest.sh@117 -- # uname -s 00:03:45.245 04:14:58 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:45.245 04:14:58 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:45.245 04:14:58 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:48.566 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:48.566 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:48.566 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:48.566 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:48.566 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:48.566 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:48.566 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:48.566 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:48.566 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:48.566 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:48.566 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:48.566 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:48.566 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:48.566 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:48.566 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:48.566 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:50.479 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:03:50.739 04:15:04 -- common/autotest_common.sh@1515 -- # sleep 1 00:03:51.681 04:15:05 -- common/autotest_common.sh@1516 -- # bdfs=() 00:03:51.681 04:15:05 -- common/autotest_common.sh@1516 -- # local bdfs 00:03:51.681 04:15:05 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:03:51.681 04:15:05 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:03:51.681 04:15:05 -- common/autotest_common.sh@1496 -- # bdfs=() 00:03:51.681 04:15:05 -- common/autotest_common.sh@1496 -- # local bdfs 00:03:51.681 04:15:05 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:51.681 04:15:05 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:51.681 04:15:05 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:03:51.681 04:15:05 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:03:51.681 04:15:05 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:03:51.681 04:15:05 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.981 Waiting for block devices as requested 00:03:54.981 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:54.981 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:54.981 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:54.981 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:54.981 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:54.981 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:54.981 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:55.240 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:03:55.240 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:03:55.501 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:03:55.501 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:03:55.501 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:03:55.501 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:03:55.762 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:03:55.762 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:03:55.762 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:03:56.022 0000:00:01.1 (8086 0b00): 
vfio-pci -> ioatdma 00:03:56.283 04:15:09 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:03:56.283 04:15:09 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:03:56.283 04:15:09 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:03:56.283 04:15:09 -- common/autotest_common.sh@1485 -- # grep 0000:65:00.0/nvme/nvme 00:03:56.283 04:15:09 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:56.283 04:15:09 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:03:56.283 04:15:09 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:03:56.283 04:15:09 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:03:56.283 04:15:09 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:03:56.283 04:15:09 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:03:56.283 04:15:09 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:03:56.283 04:15:09 -- common/autotest_common.sh@1529 -- # grep oacs 00:03:56.283 04:15:09 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:03:56.283 04:15:09 -- common/autotest_common.sh@1529 -- # oacs=' 0x5f' 00:03:56.283 04:15:09 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:03:56.283 04:15:09 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:03:56.283 04:15:09 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:03:56.283 04:15:09 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:03:56.283 04:15:09 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:03:56.283 04:15:09 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:03:56.283 04:15:09 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:03:56.283 04:15:09 -- common/autotest_common.sh@1541 -- # continue 00:03:56.283 04:15:09 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:56.283 04:15:09 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:56.283 04:15:09 -- common/autotest_common.sh@10 -- # set +x 00:03:56.283 04:15:09 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:56.283 04:15:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:56.283 04:15:09 -- common/autotest_common.sh@10 -- # set +x 00:03:56.283 04:15:09 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.588 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:59.588 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:59.588 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:59.589 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:59.589 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:59.589 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:59.589 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:59.589 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:59.589 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:03:59.589 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:03:59.589 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:03:59.589 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:03:59.849 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:03:59.849 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:03:59.849 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:03:59.849 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:03:59.849 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:00.110 04:15:13 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:04:00.110 04:15:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:00.110 04:15:13 -- common/autotest_common.sh@10 -- # set +x 00:04:00.110 04:15:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:00.110 04:15:13 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:00.110 04:15:13 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:00.110 04:15:13 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:00.110 04:15:13 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:00.110 04:15:13 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:00.110 04:15:13 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:00.110 04:15:13 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:00.110 04:15:13 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:00.110 04:15:13 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:00.110 04:15:13 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:00.110 04:15:13 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:00.110 04:15:13 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:00.371 04:15:13 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:04:00.371 04:15:13 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0 00:04:00.371 04:15:13 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:00.371 04:15:13 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:04:00.371 04:15:13 -- common/autotest_common.sh@1564 -- # device=0xa80a 00:04:00.371 04:15:13 -- common/autotest_common.sh@1565 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:04:00.371 04:15:13 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:00.371 04:15:13 -- common/autotest_common.sh@1570 -- # return 0 00:04:00.371 04:15:13 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:00.371 04:15:13 -- common/autotest_common.sh@1578 -- # return 0 00:04:00.371 04:15:13 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:00.371 04:15:13 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:00.371 04:15:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:00.371 04:15:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:00.371 04:15:13 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:00.371 04:15:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:00.371 04:15:13 -- common/autotest_common.sh@10 -- # set +x 00:04:00.371 04:15:13 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:00.371 04:15:13 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:00.371 04:15:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:00.371 04:15:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:00.371 04:15:13 -- common/autotest_common.sh@10 -- # set +x 00:04:00.371 ************************************ 00:04:00.371 START TEST env 00:04:00.371 ************************************ 00:04:00.371 04:15:13 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:00.371 * Looking for test storage... 
00:04:00.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:00.371 04:15:13 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:00.371 04:15:13 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:00.371 04:15:13 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:00.632 04:15:14 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:00.632 04:15:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:00.632 04:15:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:00.632 04:15:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:00.632 04:15:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:00.632 04:15:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:00.632 04:15:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:00.632 04:15:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:00.632 04:15:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:00.632 04:15:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:00.632 04:15:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:00.632 04:15:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:00.632 04:15:14 env -- scripts/common.sh@344 -- # case "$op" in 00:04:00.632 04:15:14 env -- scripts/common.sh@345 -- # : 1 00:04:00.632 04:15:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:00.632 04:15:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:00.632 04:15:14 env -- scripts/common.sh@365 -- # decimal 1 00:04:00.632 04:15:14 env -- scripts/common.sh@353 -- # local d=1 00:04:00.632 04:15:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:00.632 04:15:14 env -- scripts/common.sh@355 -- # echo 1 00:04:00.632 04:15:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:00.632 04:15:14 env -- scripts/common.sh@366 -- # decimal 2 00:04:00.632 04:15:14 env -- scripts/common.sh@353 -- # local d=2 00:04:00.632 04:15:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:00.632 04:15:14 env -- scripts/common.sh@355 -- # echo 2 00:04:00.632 04:15:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:00.632 04:15:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:00.632 04:15:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:00.632 04:15:14 env -- scripts/common.sh@368 -- # return 0 00:04:00.632 04:15:14 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:00.632 04:15:14 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:00.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.632 --rc genhtml_branch_coverage=1 00:04:00.632 --rc genhtml_function_coverage=1 00:04:00.632 --rc genhtml_legend=1 00:04:00.632 --rc geninfo_all_blocks=1 00:04:00.632 --rc geninfo_unexecuted_blocks=1 00:04:00.632 00:04:00.632 ' 00:04:00.632 04:15:14 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:00.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.632 --rc genhtml_branch_coverage=1 00:04:00.632 --rc genhtml_function_coverage=1 00:04:00.632 --rc genhtml_legend=1 00:04:00.632 --rc geninfo_all_blocks=1 00:04:00.632 --rc geninfo_unexecuted_blocks=1 00:04:00.632 00:04:00.632 ' 00:04:00.632 04:15:14 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:00.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.632 --rc genhtml_branch_coverage=1 00:04:00.632 --rc genhtml_function_coverage=1 
00:04:00.632 --rc genhtml_legend=1 00:04:00.632 --rc geninfo_all_blocks=1 00:04:00.632 --rc geninfo_unexecuted_blocks=1 00:04:00.632 00:04:00.632 ' 00:04:00.632 04:15:14 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:00.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:00.633 --rc genhtml_branch_coverage=1 00:04:00.633 --rc genhtml_function_coverage=1 00:04:00.633 --rc genhtml_legend=1 00:04:00.633 --rc geninfo_all_blocks=1 00:04:00.633 --rc geninfo_unexecuted_blocks=1 00:04:00.633 00:04:00.633 ' 00:04:00.633 04:15:14 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:00.633 04:15:14 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:00.633 04:15:14 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:00.633 04:15:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.633 ************************************ 00:04:00.633 START TEST env_memory 00:04:00.633 ************************************ 00:04:00.633 04:15:14 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:00.633 00:04:00.633 00:04:00.633 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.633 http://cunit.sourceforge.net/ 00:04:00.633 00:04:00.633 00:04:00.633 Suite: memory 00:04:00.633 Test: alloc and free memory map ...[2024-11-05 04:15:14.122744] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:00.633 passed 00:04:00.633 Test: mem map translation ...[2024-11-05 04:15:14.148373] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:00.633 [2024-11-05 04:15:14.148400] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:00.633 [2024-11-05 04:15:14.148447] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:00.633 [2024-11-05 04:15:14.148454] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:00.633 passed 00:04:00.633 Test: mem map registration ...[2024-11-05 04:15:14.203791] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:00.633 [2024-11-05 04:15:14.203818] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:00.633 passed 00:04:00.894 Test: mem map adjacent registrations ...passed 00:04:00.894 00:04:00.894 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.894 suites 1 1 n/a 0 0 00:04:00.894 tests 4 4 4 0 0 00:04:00.894 asserts 152 152 152 0 n/a 00:04:00.894 00:04:00.894 Elapsed time = 0.191 seconds 00:04:00.894 00:04:00.894 real 0m0.206s 00:04:00.894 user 0m0.193s 00:04:00.894 sys 0m0.012s 00:04:00.894 04:15:14 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:00.894 04:15:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 
00:04:00.894 ************************************ 00:04:00.894 END TEST env_memory 00:04:00.894 ************************************ 00:04:00.894 04:15:14 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:00.894 04:15:14 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:00.894 04:15:14 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:00.894 04:15:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.894 ************************************ 00:04:00.894 START TEST env_vtophys 00:04:00.895 ************************************ 00:04:00.895 04:15:14 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:00.895 EAL: lib.eal log level changed from notice to debug 00:04:00.895 EAL: Detected lcore 0 as core 0 on socket 0 00:04:00.895 EAL: Detected lcore 1 as core 1 on socket 0 00:04:00.895 EAL: Detected lcore 2 as core 2 on socket 0 00:04:00.895 EAL: Detected lcore 3 as core 3 on socket 0 00:04:00.895 EAL: Detected lcore 4 as core 4 on socket 0 00:04:00.895 EAL: Detected lcore 5 as core 5 on socket 0 00:04:00.895 EAL: Detected lcore 6 as core 6 on socket 0 00:04:00.895 EAL: Detected lcore 7 as core 7 on socket 0 00:04:00.895 EAL: Detected lcore 8 as core 8 on socket 0 00:04:00.895 EAL: Detected lcore 9 as core 9 on socket 0 00:04:00.895 EAL: Detected lcore 10 as core 10 on socket 0 00:04:00.895 EAL: Detected lcore 11 as core 11 on socket 0 00:04:00.895 EAL: Detected lcore 12 as core 12 on socket 0 00:04:00.895 EAL: Detected lcore 13 as core 13 on socket 0 00:04:00.895 EAL: Detected lcore 14 as core 14 on socket 0 00:04:00.895 EAL: Detected lcore 15 as core 15 on socket 0 00:04:00.895 EAL: Detected lcore 16 as core 16 on socket 0 00:04:00.895 EAL: Detected lcore 17 as core 17 on socket 0 00:04:00.895 EAL: Detected lcore 18 as core 18 on socket 0 00:04:00.895 EAL: Detected lcore 19 as core 19 on socket 0 00:04:00.895 EAL: Detected lcore 20 as core 20 on socket 0 00:04:00.895 EAL: Detected lcore 21 as core 21 on socket 0 00:04:00.895 EAL: Detected lcore 22 as core 22 on socket 0 00:04:00.895 EAL: Detected lcore 23 as core 23 on socket 0 00:04:00.895 EAL: Detected lcore 24 as core 24 on socket 0 00:04:00.895 EAL: Detected lcore 25 as core 25 on socket 0 00:04:00.895 EAL: Detected lcore 26 as core 26 on socket 0 00:04:00.895 EAL: Detected lcore 27 as core 27 on socket 0 00:04:00.895 EAL: Detected lcore 28 as core 28 on socket 0 00:04:00.895 EAL: Detected lcore 29 as core 29 on socket 0 00:04:00.895 EAL: Detected lcore 30 as core 30 on socket 0 00:04:00.895 EAL: Detected lcore 31 as core 31 on socket 0 00:04:00.895 EAL: Detected lcore 32 as core 32 on socket 0 00:04:00.895 EAL: Detected lcore 33 as core 33 on socket 0 00:04:00.895 EAL: Detected lcore 34 as core 34 on socket 0 00:04:00.895 EAL: Detected lcore 35 as core 35 on socket 0 00:04:00.895 EAL: Detected lcore 36 as core 0 on socket 1 00:04:00.895 EAL: Detected lcore 37 as core 1 on socket 1 00:04:00.895 EAL: Detected lcore 38 as core 2 on socket 1 00:04:00.895 EAL: Detected lcore 39 as core 3 on socket 1 00:04:00.895 EAL: Detected lcore 40 as core 4 on socket 1 00:04:00.895 EAL: Detected lcore 41 as core 5 on socket 1 00:04:00.895 EAL: Detected lcore 42 as core 6 on socket 1 00:04:00.895 EAL: Detected lcore 43 as core 7 on socket 1 00:04:00.895 EAL: Detected lcore 44 as core 8 on socket 1 00:04:00.895 EAL: Detected lcore 45 as core 9 on socket 1 
00:04:00.895 EAL: Detected lcore 46 as core 10 on socket 1 00:04:00.895 EAL: Detected lcore 47 as core 11 on socket 1 00:04:00.895 EAL: Detected lcore 48 as core 12 on socket 1 00:04:00.895 EAL: Detected lcore 49 as core 13 on socket 1 00:04:00.895 EAL: Detected lcore 50 as core 14 on socket 1 00:04:00.895 EAL: Detected lcore 51 as core 15 on socket 1 00:04:00.895 EAL: Detected lcore 52 as core 16 on socket 1 00:04:00.895 EAL: Detected lcore 53 as core 17 on socket 1 00:04:00.895 EAL: Detected lcore 54 as core 18 on socket 1 00:04:00.895 EAL: Detected lcore 55 as core 19 on socket 1 00:04:00.895 EAL: Detected lcore 56 as core 20 on socket 1 00:04:00.895 EAL: Detected lcore 57 as core 21 on socket 1 00:04:00.895 EAL: Detected lcore 58 as core 22 on socket 1 00:04:00.895 EAL: Detected lcore 59 as core 23 on socket 1 00:04:00.895 EAL: Detected lcore 60 as core 24 on socket 1 00:04:00.895 EAL: Detected lcore 61 as core 25 on socket 1 00:04:00.895 EAL: Detected lcore 62 as core 26 on socket 1 00:04:00.895 EAL: Detected lcore 63 as core 27 on socket 1 00:04:00.895 EAL: Detected lcore 64 as core 28 on socket 1 00:04:00.895 EAL: Detected lcore 65 as core 29 on socket 1 00:04:00.895 EAL: Detected lcore 66 as core 30 on socket 1 00:04:00.895 EAL: Detected lcore 67 as core 31 on socket 1 00:04:00.895 EAL: Detected lcore 68 as core 32 on socket 1 00:04:00.895 EAL: Detected lcore 69 as core 33 on socket 1 00:04:00.895 EAL: Detected lcore 70 as core 34 on socket 1 00:04:00.895 EAL: Detected lcore 71 as core 35 on socket 1 00:04:00.895 EAL: Detected lcore 72 as core 0 on socket 0 00:04:00.895 EAL: Detected lcore 73 as core 1 on socket 0 00:04:00.895 EAL: Detected lcore 74 as core 2 on socket 0 00:04:00.895 EAL: Detected lcore 75 as core 3 on socket 0 00:04:00.895 EAL: Detected lcore 76 as core 4 on socket 0 00:04:00.895 EAL: Detected lcore 77 as core 5 on socket 0 00:04:00.895 EAL: Detected lcore 78 as core 6 on socket 0 00:04:00.895 EAL: Detected lcore 79 as core 7 on socket 0 00:04:00.895 EAL: Detected lcore 80 as core 8 on socket 0 00:04:00.895 EAL: Detected lcore 81 as core 9 on socket 0 00:04:00.895 EAL: Detected lcore 82 as core 10 on socket 0 00:04:00.895 EAL: Detected lcore 83 as core 11 on socket 0 00:04:00.895 EAL: Detected lcore 84 as core 12 on socket 0 00:04:00.895 EAL: Detected lcore 85 as core 13 on socket 0 00:04:00.895 EAL: Detected lcore 86 as core 14 on socket 0 00:04:00.895 EAL: Detected lcore 87 as core 15 on socket 0 00:04:00.895 EAL: Detected lcore 88 as core 16 on socket 0 00:04:00.895 EAL: Detected lcore 89 as core 17 on socket 0 00:04:00.895 EAL: Detected lcore 90 as core 18 on socket 0 00:04:00.895 EAL: Detected lcore 91 as core 19 on socket 0 00:04:00.895 EAL: Detected lcore 92 as core 20 on socket 0 00:04:00.895 EAL: Detected lcore 93 as core 21 on socket 0 00:04:00.895 EAL: Detected lcore 94 as core 22 on socket 0 00:04:00.895 EAL: Detected lcore 95 as core 23 on socket 0 00:04:00.895 EAL: Detected lcore 96 as core 24 on socket 0 00:04:00.895 EAL: Detected lcore 97 as core 25 on socket 0 00:04:00.895 EAL: Detected lcore 98 as core 26 on socket 0 00:04:00.895 EAL: Detected lcore 99 as core 27 on socket 0 00:04:00.895 EAL: Detected lcore 100 as core 28 on socket 0 00:04:00.895 EAL: Detected lcore 101 as core 29 on socket 0 00:04:00.895 EAL: Detected lcore 102 as core 30 on socket 0 00:04:00.895 EAL: Detected lcore 103 as core 31 on socket 0 00:04:00.895 EAL: Detected lcore 104 as core 32 on socket 0 00:04:00.895 EAL: Detected lcore 105 as core 33 on socket 0 00:04:00.895 EAL: 
Detected lcore 106 as core 34 on socket 0 00:04:00.895 EAL: Detected lcore 107 as core 35 on socket 0 00:04:00.895 EAL: Detected lcore 108 as core 0 on socket 1 00:04:00.895 EAL: Detected lcore 109 as core 1 on socket 1 00:04:00.895 EAL: Detected lcore 110 as core 2 on socket 1 00:04:00.895 EAL: Detected lcore 111 as core 3 on socket 1 00:04:00.895 EAL: Detected lcore 112 as core 4 on socket 1 00:04:00.895 EAL: Detected lcore 113 as core 5 on socket 1 00:04:00.895 EAL: Detected lcore 114 as core 6 on socket 1 00:04:00.895 EAL: Detected lcore 115 as core 7 on socket 1 00:04:00.895 EAL: Detected lcore 116 as core 8 on socket 1 00:04:00.895 EAL: Detected lcore 117 as core 9 on socket 1 00:04:00.895 EAL: Detected lcore 118 as core 10 on socket 1 00:04:00.895 EAL: Detected lcore 119 as core 11 on socket 1 00:04:00.895 EAL: Detected lcore 120 as core 12 on socket 1 00:04:00.895 EAL: Detected lcore 121 as core 13 on socket 1 00:04:00.895 EAL: Detected lcore 122 as core 14 on socket 1 00:04:00.895 EAL: Detected lcore 123 as core 15 on socket 1 00:04:00.895 EAL: Detected lcore 124 as core 16 on socket 1 00:04:00.895 EAL: Detected lcore 125 as core 17 on socket 1 00:04:00.895 EAL: Detected lcore 126 as core 18 on socket 1 00:04:00.895 EAL: Detected lcore 127 as core 19 on socket 1 00:04:00.895 EAL: Skipped lcore 128 as core 20 on socket 1 00:04:00.895 EAL: Skipped lcore 129 as core 21 on socket 1 00:04:00.895 EAL: Skipped lcore 130 as core 22 on socket 1 00:04:00.895 EAL: Skipped lcore 131 as core 23 on socket 1 00:04:00.895 EAL: Skipped lcore 132 as core 24 on socket 1 00:04:00.895 EAL: Skipped lcore 133 as core 25 on socket 1 00:04:00.895 EAL: Skipped lcore 134 as core 26 on socket 1 00:04:00.895 EAL: Skipped lcore 135 as core 27 on socket 1 00:04:00.895 EAL: Skipped lcore 136 as core 28 on socket 1 00:04:00.895 EAL: Skipped lcore 137 as core 29 on socket 1 00:04:00.895 EAL: Skipped lcore 138 as core 30 on socket 1 00:04:00.895 EAL: Skipped lcore 139 as core 31 on socket 1 00:04:00.895 EAL: Skipped lcore 140 as core 32 on socket 1 00:04:00.895 EAL: Skipped lcore 141 as core 33 on socket 1 00:04:00.895 EAL: Skipped lcore 142 as core 34 on socket 1 00:04:00.895 EAL: Skipped lcore 143 as core 35 on socket 1 00:04:00.895 EAL: Maximum logical cores by configuration: 128 00:04:00.895 EAL: Detected CPU lcores: 128 00:04:00.895 EAL: Detected NUMA nodes: 2 00:04:00.895 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:00.895 EAL: Detected shared linkage of DPDK 00:04:00.895 EAL: No shared files mode enabled, IPC will be disabled 00:04:00.895 EAL: Bus pci wants IOVA as 'DC' 00:04:00.895 EAL: Buses did not request a specific IOVA mode. 00:04:00.895 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:00.895 EAL: Selected IOVA mode 'VA' 00:04:00.895 EAL: Probing VFIO support... 00:04:00.895 EAL: IOMMU type 1 (Type 1) is supported 00:04:00.895 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:00.895 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:00.895 EAL: VFIO support initialized 00:04:00.895 EAL: Ask a virtual area of 0x2e000 bytes 00:04:00.895 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:00.895 EAL: Setting up physically contiguous memory... 
00:04:00.895 EAL: Setting maximum number of open files to 524288 00:04:00.895 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:00.895 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:00.895 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:00.895 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.895 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:00.895 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.895 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.895 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:00.895 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:00.895 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.895 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:00.895 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.895 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.895 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:00.895 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:00.895 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.895 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:00.895 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.895 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.896 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:00.896 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:00.896 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.896 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:00.896 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.896 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.896 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:00.896 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:00.896 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:00.896 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.896 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:00.896 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.896 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.896 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:00.896 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:00.896 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.896 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:00.896 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.896 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.896 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:00.896 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:00.896 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.896 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:00.896 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.896 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.896 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:00.896 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:00.896 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.896 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:00.896 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.896 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.896 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:00.896 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:00.896 EAL: Hugepages will be freed exactly as allocated. 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: TSC frequency is ~2400000 KHz 00:04:00.896 EAL: Main lcore 0 is ready (tid=7f6e513eca00;cpuset=[0]) 00:04:00.896 EAL: Trying to obtain current memory policy. 00:04:00.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.896 EAL: Restoring previous memory policy: 0 00:04:00.896 EAL: request: mp_malloc_sync 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: Heap on socket 0 was expanded by 2MB 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:00.896 EAL: Mem event callback 'spdk:(nil)' registered 00:04:00.896 00:04:00.896 00:04:00.896 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.896 http://cunit.sourceforge.net/ 00:04:00.896 00:04:00.896 00:04:00.896 Suite: components_suite 00:04:00.896 Test: vtophys_malloc_test ...passed 00:04:00.896 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:00.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.896 EAL: Restoring previous memory policy: 4 00:04:00.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.896 EAL: request: mp_malloc_sync 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: Heap on socket 0 was expanded by 4MB 00:04:00.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.896 EAL: request: mp_malloc_sync 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: Heap on socket 0 was shrunk by 4MB 00:04:00.896 EAL: Trying to obtain current memory policy. 00:04:00.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.896 EAL: Restoring previous memory policy: 4 00:04:00.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.896 EAL: request: mp_malloc_sync 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: Heap on socket 0 was expanded by 6MB 00:04:00.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.896 EAL: request: mp_malloc_sync 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: Heap on socket 0 was shrunk by 6MB 00:04:00.896 EAL: Trying to obtain current memory policy. 00:04:00.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.896 EAL: Restoring previous memory policy: 4 00:04:00.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.896 EAL: request: mp_malloc_sync 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: Heap on socket 0 was expanded by 10MB 00:04:00.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.896 EAL: request: mp_malloc_sync 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: Heap on socket 0 was shrunk by 10MB 00:04:00.896 EAL: Trying to obtain current memory policy. 
00:04:00.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.896 EAL: Restoring previous memory policy: 4 00:04:00.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.896 EAL: request: mp_malloc_sync 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: Heap on socket 0 was expanded by 18MB 00:04:00.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.896 EAL: request: mp_malloc_sync 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: Heap on socket 0 was shrunk by 18MB 00:04:00.896 EAL: Trying to obtain current memory policy. 00:04:00.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.896 EAL: Restoring previous memory policy: 4 00:04:00.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.896 EAL: request: mp_malloc_sync 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: Heap on socket 0 was expanded by 34MB 00:04:00.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.896 EAL: request: mp_malloc_sync 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: Heap on socket 0 was shrunk by 34MB 00:04:00.896 EAL: Trying to obtain current memory policy. 00:04:00.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.896 EAL: Restoring previous memory policy: 4 00:04:00.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.896 EAL: request: mp_malloc_sync 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: Heap on socket 0 was expanded by 66MB 00:04:00.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.896 EAL: request: mp_malloc_sync 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: Heap on socket 0 was shrunk by 66MB 00:04:00.896 EAL: Trying to obtain current memory policy. 00:04:00.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.896 EAL: Restoring previous memory policy: 4 00:04:00.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.896 EAL: request: mp_malloc_sync 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: Heap on socket 0 was expanded by 130MB 00:04:00.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.896 EAL: request: mp_malloc_sync 00:04:00.896 EAL: No shared files mode enabled, IPC is disabled 00:04:00.896 EAL: Heap on socket 0 was shrunk by 130MB 00:04:00.896 EAL: Trying to obtain current memory policy. 00:04:00.896 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.157 EAL: Restoring previous memory policy: 4 00:04:01.157 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.157 EAL: request: mp_malloc_sync 00:04:01.157 EAL: No shared files mode enabled, IPC is disabled 00:04:01.157 EAL: Heap on socket 0 was expanded by 258MB 00:04:01.157 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.157 EAL: request: mp_malloc_sync 00:04:01.157 EAL: No shared files mode enabled, IPC is disabled 00:04:01.157 EAL: Heap on socket 0 was shrunk by 258MB 00:04:01.157 EAL: Trying to obtain current memory policy. 
00:04:01.157 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.157 EAL: Restoring previous memory policy: 4 00:04:01.157 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.157 EAL: request: mp_malloc_sync 00:04:01.157 EAL: No shared files mode enabled, IPC is disabled 00:04:01.157 EAL: Heap on socket 0 was expanded by 514MB 00:04:01.157 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.417 EAL: request: mp_malloc_sync 00:04:01.417 EAL: No shared files mode enabled, IPC is disabled 00:04:01.417 EAL: Heap on socket 0 was shrunk by 514MB 00:04:01.417 EAL: Trying to obtain current memory policy. 00:04:01.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.417 EAL: Restoring previous memory policy: 4 00:04:01.417 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.417 EAL: request: mp_malloc_sync 00:04:01.417 EAL: No shared files mode enabled, IPC is disabled 00:04:01.417 EAL: Heap on socket 0 was expanded by 1026MB 00:04:01.417 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.678 EAL: request: mp_malloc_sync 00:04:01.678 EAL: No shared files mode enabled, IPC is disabled 00:04:01.678 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:01.678 passed 00:04:01.678 00:04:01.678 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.678 suites 1 1 n/a 0 0 00:04:01.678 tests 2 2 2 0 0 00:04:01.678 asserts 497 497 497 0 n/a 00:04:01.678 00:04:01.678 Elapsed time = 0.650 seconds 00:04:01.678 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.678 EAL: request: mp_malloc_sync 00:04:01.678 EAL: No shared files mode enabled, IPC is disabled 00:04:01.678 EAL: Heap on socket 0 was shrunk by 2MB 00:04:01.678 EAL: No shared files mode enabled, IPC is disabled 00:04:01.678 EAL: No shared files mode enabled, IPC is disabled 00:04:01.678 EAL: No shared files mode enabled, IPC is disabled 00:04:01.678 00:04:01.678 real 0m0.786s 00:04:01.678 user 0m0.412s 00:04:01.678 sys 0m0.341s 00:04:01.678 04:15:15 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:01.678 04:15:15 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:01.678 ************************************ 00:04:01.678 END TEST env_vtophys 00:04:01.678 ************************************ 00:04:01.678 04:15:15 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:01.678 04:15:15 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:01.678 04:15:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:01.678 04:15:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.678 ************************************ 00:04:01.678 START TEST env_pci 00:04:01.678 ************************************ 00:04:01.678 04:15:15 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:01.678 00:04:01.678 00:04:01.678 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.678 http://cunit.sourceforge.net/ 00:04:01.678 00:04:01.678 00:04:01.678 Suite: pci 00:04:01.678 Test: pci_hook ...[2024-11-05 04:15:15.239983] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2751087 has claimed it 00:04:01.678 EAL: Cannot find device (10000:00:01.0) 00:04:01.678 EAL: Failed to attach device on primary process 00:04:01.678 passed 00:04:01.678 00:04:01.678 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:01.678 suites 1 1 n/a 0 0 00:04:01.678 tests 1 1 1 0 0 00:04:01.678 asserts 25 25 25 0 n/a 00:04:01.678 00:04:01.678 Elapsed time = 0.030 seconds 00:04:01.678 00:04:01.678 real 0m0.052s 00:04:01.678 user 0m0.014s 00:04:01.678 sys 0m0.037s 00:04:01.678 04:15:15 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:01.678 04:15:15 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:01.678 ************************************ 00:04:01.678 END TEST env_pci 00:04:01.678 ************************************ 00:04:01.678 04:15:15 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:01.678 04:15:15 env -- env/env.sh@15 -- # uname 00:04:01.939 04:15:15 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:01.939 04:15:15 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:01.939 04:15:15 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:01.939 04:15:15 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:01.939 04:15:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:01.939 04:15:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.939 ************************************ 00:04:01.939 START TEST env_dpdk_post_init 00:04:01.939 ************************************ 00:04:01.939 04:15:15 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:01.939 EAL: Detected CPU lcores: 128 00:04:01.939 EAL: Detected NUMA nodes: 2 00:04:01.939 EAL: Detected shared linkage of DPDK 00:04:01.939 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:01.939 EAL: Selected IOVA mode 'VA' 00:04:01.939 EAL: VFIO support initialized 00:04:01.939 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:01.939 EAL: Using IOMMU type 1 (Type 1) 00:04:02.199 EAL: Ignore mapping IO port bar(1) 00:04:02.199 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:04:02.199 EAL: Ignore mapping IO port bar(1) 00:04:02.459 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:04:02.459 EAL: Ignore mapping IO port bar(1) 00:04:02.720 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:04:02.720 EAL: Ignore mapping IO port bar(1) 00:04:02.980 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:04:02.980 EAL: Ignore mapping IO port bar(1) 00:04:02.980 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:04:03.240 EAL: Ignore mapping IO port bar(1) 00:04:03.240 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:04:03.500 EAL: Ignore mapping IO port bar(1) 00:04:03.500 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:04:03.759 EAL: Ignore mapping IO port bar(1) 00:04:03.759 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:04:04.019 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:04:04.019 EAL: Ignore mapping IO port bar(1) 00:04:04.280 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:04:04.280 EAL: Ignore mapping IO port bar(1) 00:04:04.540 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:04:04.540 EAL: Ignore mapping IO port bar(1) 00:04:04.540 
EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:04:04.800 EAL: Ignore mapping IO port bar(1) 00:04:04.800 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:04:05.060 EAL: Ignore mapping IO port bar(1) 00:04:05.060 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:04:05.320 EAL: Ignore mapping IO port bar(1) 00:04:05.320 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:04:05.581 EAL: Ignore mapping IO port bar(1) 00:04:05.581 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:04:05.581 EAL: Ignore mapping IO port bar(1) 00:04:05.841 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:04:05.841 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:04:05.841 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:04:05.841 Starting DPDK initialization... 00:04:05.841 Starting SPDK post initialization... 00:04:05.841 SPDK NVMe probe 00:04:05.841 Attaching to 0000:65:00.0 00:04:05.841 Attached to 0000:65:00.0 00:04:05.841 Cleaning up... 00:04:07.754 00:04:07.754 real 0m5.731s 00:04:07.754 user 0m0.102s 00:04:07.754 sys 0m0.170s 00:04:07.754 04:15:21 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:07.754 04:15:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:07.754 ************************************ 00:04:07.754 END TEST env_dpdk_post_init 00:04:07.754 ************************************ 00:04:07.754 04:15:21 env -- env/env.sh@26 -- # uname 00:04:07.754 04:15:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:07.754 04:15:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:07.754 04:15:21 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:07.754 04:15:21 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:07.754 04:15:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.754 ************************************ 00:04:07.754 START TEST env_mem_callbacks 00:04:07.754 ************************************ 00:04:07.754 04:15:21 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:07.754 EAL: Detected CPU lcores: 128 00:04:07.754 EAL: Detected NUMA nodes: 2 00:04:07.754 EAL: Detected shared linkage of DPDK 00:04:07.754 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:07.754 EAL: Selected IOVA mode 'VA' 00:04:07.754 EAL: VFIO support initialized 00:04:07.754 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:07.754 00:04:07.754 00:04:07.754 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.754 http://cunit.sourceforge.net/ 00:04:07.754 00:04:07.754 00:04:07.754 Suite: memory 00:04:07.754 Test: test ... 
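The register/unregister trace that follows is the env_mem_callbacks test watching the SPDK env layer notify subscribers as memory regions come and go during allocation; the same notifications can also be driven explicitly through spdk_mem_register()/spdk_mem_unregister(). A minimal sketch of that explicit path, with hypothetical buffer handling, is:

    #include "spdk/env.h"

    /* Register an externally allocated buffer with the SPDK env layer so
     * it becomes usable for DMA, then withdraw it. Each call generates the
     * kind of "register <vaddr> <len>" / "unregister <vaddr> <len>" event
     * that the test's callback prints in the trace below. */
    static int
    example_track_buffer(void *buf, size_t len)
    {
        int rc = spdk_mem_register(buf, len);
        if (rc != 0) {
            return rc;
        }
        /* ... buf may now be used for I/O by SPDK drivers ... */
        return spdk_mem_unregister(buf, len);
    }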
00:04:07.754 register 0x200000200000 2097152 00:04:07.754 malloc 3145728 00:04:07.754 register 0x200000400000 4194304 00:04:07.754 buf 0x200000500000 len 3145728 PASSED 00:04:07.754 malloc 64 00:04:07.754 buf 0x2000004fff40 len 64 PASSED 00:04:07.754 malloc 4194304 00:04:07.754 register 0x200000800000 6291456 00:04:07.754 buf 0x200000a00000 len 4194304 PASSED 00:04:07.754 free 0x200000500000 3145728 00:04:07.754 free 0x2000004fff40 64 00:04:07.754 unregister 0x200000400000 4194304 PASSED 00:04:07.754 free 0x200000a00000 4194304 00:04:07.754 unregister 0x200000800000 6291456 PASSED 00:04:07.754 malloc 8388608 00:04:07.754 register 0x200000400000 10485760 00:04:07.754 buf 0x200000600000 len 8388608 PASSED 00:04:07.754 free 0x200000600000 8388608 00:04:07.754 unregister 0x200000400000 10485760 PASSED 00:04:07.754 passed 00:04:07.754 00:04:07.754 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.754 suites 1 1 n/a 0 0 00:04:07.754 tests 1 1 1 0 0 00:04:07.754 asserts 15 15 15 0 n/a 00:04:07.754 00:04:07.754 Elapsed time = 0.006 seconds 00:04:07.754 00:04:07.754 real 0m0.047s 00:04:07.754 user 0m0.012s 00:04:07.754 sys 0m0.034s 00:04:07.754 04:15:21 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:07.754 04:15:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:07.754 ************************************ 00:04:07.754 END TEST env_mem_callbacks 00:04:07.754 ************************************ 00:04:07.754 00:04:07.754 real 0m7.433s 00:04:07.754 user 0m0.990s 00:04:07.754 sys 0m0.984s 00:04:07.754 04:15:21 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:07.754 04:15:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:07.754 ************************************ 00:04:07.754 END TEST env 00:04:07.754 ************************************ 00:04:07.754 04:15:21 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:07.754 04:15:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:07.754 04:15:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:07.754 04:15:21 -- common/autotest_common.sh@10 -- # set +x 00:04:07.754 ************************************ 00:04:07.755 START TEST rpc 00:04:07.755 ************************************ 00:04:07.755 04:15:21 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:08.016 * Looking for test storage... 
00:04:08.016 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:08.016 04:15:21 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:08.016 04:15:21 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:08.016 04:15:21 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:08.016 04:15:21 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:08.016 04:15:21 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.016 04:15:21 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.016 04:15:21 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.016 04:15:21 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.016 04:15:21 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.016 04:15:21 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.016 04:15:21 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.016 04:15:21 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.016 04:15:21 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.016 04:15:21 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.016 04:15:21 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.016 04:15:21 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:08.016 04:15:21 rpc -- scripts/common.sh@345 -- # : 1 00:04:08.016 04:15:21 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.016 04:15:21 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.016 04:15:21 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:08.016 04:15:21 rpc -- scripts/common.sh@353 -- # local d=1 00:04:08.016 04:15:21 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.016 04:15:21 rpc -- scripts/common.sh@355 -- # echo 1 00:04:08.016 04:15:21 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.016 04:15:21 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:08.016 04:15:21 rpc -- scripts/common.sh@353 -- # local d=2 00:04:08.016 04:15:21 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.016 04:15:21 rpc -- scripts/common.sh@355 -- # echo 2 00:04:08.016 04:15:21 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.016 04:15:21 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.016 04:15:21 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.016 04:15:21 rpc -- scripts/common.sh@368 -- # return 0 00:04:08.016 04:15:21 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.016 04:15:21 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:08.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.016 --rc genhtml_branch_coverage=1 00:04:08.016 --rc genhtml_function_coverage=1 00:04:08.016 --rc genhtml_legend=1 00:04:08.016 --rc geninfo_all_blocks=1 00:04:08.016 --rc geninfo_unexecuted_blocks=1 00:04:08.016 00:04:08.016 ' 00:04:08.016 04:15:21 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:08.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.016 --rc genhtml_branch_coverage=1 00:04:08.016 --rc genhtml_function_coverage=1 00:04:08.016 --rc genhtml_legend=1 00:04:08.016 --rc geninfo_all_blocks=1 00:04:08.016 --rc geninfo_unexecuted_blocks=1 00:04:08.016 00:04:08.016 ' 00:04:08.016 04:15:21 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:08.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.016 --rc genhtml_branch_coverage=1 00:04:08.016 --rc genhtml_function_coverage=1 
00:04:08.016 --rc genhtml_legend=1 00:04:08.016 --rc geninfo_all_blocks=1 00:04:08.016 --rc geninfo_unexecuted_blocks=1 00:04:08.016 00:04:08.016 ' 00:04:08.016 04:15:21 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:08.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.016 --rc genhtml_branch_coverage=1 00:04:08.016 --rc genhtml_function_coverage=1 00:04:08.016 --rc genhtml_legend=1 00:04:08.016 --rc geninfo_all_blocks=1 00:04:08.016 --rc geninfo_unexecuted_blocks=1 00:04:08.016 00:04:08.016 ' 00:04:08.016 04:15:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2752543 00:04:08.016 04:15:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.016 04:15:21 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:08.016 04:15:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2752543 00:04:08.016 04:15:21 rpc -- common/autotest_common.sh@833 -- # '[' -z 2752543 ']' 00:04:08.016 04:15:21 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:08.016 04:15:21 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:08.016 04:15:21 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:08.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:08.016 04:15:21 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:08.016 04:15:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.016 [2024-11-05 04:15:21.622914] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:04:08.016 [2024-11-05 04:15:21.622987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2752543 ] 00:04:08.276 [2024-11-05 04:15:21.698608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.277 [2024-11-05 04:15:21.740269] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:08.277 [2024-11-05 04:15:21.740302] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2752543' to capture a snapshot of events at runtime. 00:04:08.277 [2024-11-05 04:15:21.740310] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:08.277 [2024-11-05 04:15:21.740318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:08.277 [2024-11-05 04:15:21.740324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2752543 for offline analysis/debug. 
00:04:08.277 [2024-11-05 04:15:21.740958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:08.847 04:15:22 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:08.847 04:15:22 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:08.847 04:15:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:08.847 04:15:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:08.847 04:15:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:08.847 04:15:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:08.847 04:15:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:08.847 04:15:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:08.847 04:15:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.847 ************************************ 00:04:08.847 START TEST rpc_integrity 00:04:08.847 ************************************ 00:04:08.847 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:08.847 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:08.847 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:08.847 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:08.847 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:08.847 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:08.847 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.108 { 00:04:09.108 "name": "Malloc0", 00:04:09.108 "aliases": [ 00:04:09.108 "a7a3c60b-0e53-482a-9fa1-3a8d06bd0179" 00:04:09.108 ], 00:04:09.108 "product_name": "Malloc disk", 00:04:09.108 "block_size": 512, 00:04:09.108 "num_blocks": 16384, 00:04:09.108 "uuid": "a7a3c60b-0e53-482a-9fa1-3a8d06bd0179", 00:04:09.108 "assigned_rate_limits": { 00:04:09.108 "rw_ios_per_sec": 0, 00:04:09.108 "rw_mbytes_per_sec": 0, 00:04:09.108 "r_mbytes_per_sec": 0, 00:04:09.108 "w_mbytes_per_sec": 0 00:04:09.108 }, 
00:04:09.108 "claimed": false, 00:04:09.108 "zoned": false, 00:04:09.108 "supported_io_types": { 00:04:09.108 "read": true, 00:04:09.108 "write": true, 00:04:09.108 "unmap": true, 00:04:09.108 "flush": true, 00:04:09.108 "reset": true, 00:04:09.108 "nvme_admin": false, 00:04:09.108 "nvme_io": false, 00:04:09.108 "nvme_io_md": false, 00:04:09.108 "write_zeroes": true, 00:04:09.108 "zcopy": true, 00:04:09.108 "get_zone_info": false, 00:04:09.108 "zone_management": false, 00:04:09.108 "zone_append": false, 00:04:09.108 "compare": false, 00:04:09.108 "compare_and_write": false, 00:04:09.108 "abort": true, 00:04:09.108 "seek_hole": false, 00:04:09.108 "seek_data": false, 00:04:09.108 "copy": true, 00:04:09.108 "nvme_iov_md": false 00:04:09.108 }, 00:04:09.108 "memory_domains": [ 00:04:09.108 { 00:04:09.108 "dma_device_id": "system", 00:04:09.108 "dma_device_type": 1 00:04:09.108 }, 00:04:09.108 { 00:04:09.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.108 "dma_device_type": 2 00:04:09.108 } 00:04:09.108 ], 00:04:09.108 "driver_specific": {} 00:04:09.108 } 00:04:09.108 ]' 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.108 [2024-11-05 04:15:22.573703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:09.108 [2024-11-05 04:15:22.573735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.108 [2024-11-05 04:15:22.573752] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bb1da0 00:04:09.108 [2024-11-05 04:15:22.573760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.108 [2024-11-05 04:15:22.575113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.108 [2024-11-05 04:15:22.575135] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.108 Passthru0 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.108 { 00:04:09.108 "name": "Malloc0", 00:04:09.108 "aliases": [ 00:04:09.108 "a7a3c60b-0e53-482a-9fa1-3a8d06bd0179" 00:04:09.108 ], 00:04:09.108 "product_name": "Malloc disk", 00:04:09.108 "block_size": 512, 00:04:09.108 "num_blocks": 16384, 00:04:09.108 "uuid": "a7a3c60b-0e53-482a-9fa1-3a8d06bd0179", 00:04:09.108 "assigned_rate_limits": { 00:04:09.108 "rw_ios_per_sec": 0, 00:04:09.108 "rw_mbytes_per_sec": 0, 00:04:09.108 "r_mbytes_per_sec": 0, 00:04:09.108 "w_mbytes_per_sec": 0 00:04:09.108 }, 00:04:09.108 "claimed": true, 00:04:09.108 "claim_type": "exclusive_write", 00:04:09.108 "zoned": false, 00:04:09.108 "supported_io_types": { 00:04:09.108 "read": true, 00:04:09.108 "write": true, 00:04:09.108 "unmap": true, 00:04:09.108 "flush": 
true, 00:04:09.108 "reset": true, 00:04:09.108 "nvme_admin": false, 00:04:09.108 "nvme_io": false, 00:04:09.108 "nvme_io_md": false, 00:04:09.108 "write_zeroes": true, 00:04:09.108 "zcopy": true, 00:04:09.108 "get_zone_info": false, 00:04:09.108 "zone_management": false, 00:04:09.108 "zone_append": false, 00:04:09.108 "compare": false, 00:04:09.108 "compare_and_write": false, 00:04:09.108 "abort": true, 00:04:09.108 "seek_hole": false, 00:04:09.108 "seek_data": false, 00:04:09.108 "copy": true, 00:04:09.108 "nvme_iov_md": false 00:04:09.108 }, 00:04:09.108 "memory_domains": [ 00:04:09.108 { 00:04:09.108 "dma_device_id": "system", 00:04:09.108 "dma_device_type": 1 00:04:09.108 }, 00:04:09.108 { 00:04:09.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.108 "dma_device_type": 2 00:04:09.108 } 00:04:09.108 ], 00:04:09.108 "driver_specific": {} 00:04:09.108 }, 00:04:09.108 { 00:04:09.108 "name": "Passthru0", 00:04:09.108 "aliases": [ 00:04:09.108 "2b1eedca-3a20-5c5b-8a7e-436ec2ef63ab" 00:04:09.108 ], 00:04:09.108 "product_name": "passthru", 00:04:09.108 "block_size": 512, 00:04:09.108 "num_blocks": 16384, 00:04:09.108 "uuid": "2b1eedca-3a20-5c5b-8a7e-436ec2ef63ab", 00:04:09.108 "assigned_rate_limits": { 00:04:09.108 "rw_ios_per_sec": 0, 00:04:09.108 "rw_mbytes_per_sec": 0, 00:04:09.108 "r_mbytes_per_sec": 0, 00:04:09.108 "w_mbytes_per_sec": 0 00:04:09.108 }, 00:04:09.108 "claimed": false, 00:04:09.108 "zoned": false, 00:04:09.108 "supported_io_types": { 00:04:09.108 "read": true, 00:04:09.108 "write": true, 00:04:09.108 "unmap": true, 00:04:09.108 "flush": true, 00:04:09.108 "reset": true, 00:04:09.108 "nvme_admin": false, 00:04:09.108 "nvme_io": false, 00:04:09.108 "nvme_io_md": false, 00:04:09.108 "write_zeroes": true, 00:04:09.108 "zcopy": true, 00:04:09.108 "get_zone_info": false, 00:04:09.108 "zone_management": false, 00:04:09.108 "zone_append": false, 00:04:09.108 "compare": false, 00:04:09.108 "compare_and_write": false, 00:04:09.108 "abort": true, 00:04:09.108 "seek_hole": false, 00:04:09.108 "seek_data": false, 00:04:09.108 "copy": true, 00:04:09.108 "nvme_iov_md": false 00:04:09.108 }, 00:04:09.108 "memory_domains": [ 00:04:09.108 { 00:04:09.108 "dma_device_id": "system", 00:04:09.108 "dma_device_type": 1 00:04:09.108 }, 00:04:09.108 { 00:04:09.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.108 "dma_device_type": 2 00:04:09.108 } 00:04:09.108 ], 00:04:09.108 "driver_specific": { 00:04:09.108 "passthru": { 00:04:09.108 "name": "Passthru0", 00:04:09.108 "base_bdev_name": "Malloc0" 00:04:09.108 } 00:04:09.108 } 00:04:09.108 } 00:04:09.108 ]' 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.108 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.108 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.109 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:09.109 04:15:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:09.109 00:04:09.109 real 0m0.284s 00:04:09.109 user 0m0.181s 00:04:09.109 sys 0m0.041s 00:04:09.109 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:09.109 04:15:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.109 ************************************ 00:04:09.109 END TEST rpc_integrity 00:04:09.109 ************************************ 00:04:09.408 04:15:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:09.408 04:15:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:09.408 04:15:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:09.408 04:15:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.408 ************************************ 00:04:09.408 START TEST rpc_plugins 00:04:09.408 ************************************ 00:04:09.409 04:15:22 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:09.409 04:15:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:09.409 04:15:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.409 04:15:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.409 04:15:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.409 04:15:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:09.409 04:15:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:09.409 04:15:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.409 04:15:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.409 04:15:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.409 04:15:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:09.409 { 00:04:09.409 "name": "Malloc1", 00:04:09.409 "aliases": [ 00:04:09.409 "c025fb5e-d4c0-421d-920d-c4e895199dc2" 00:04:09.409 ], 00:04:09.409 "product_name": "Malloc disk", 00:04:09.409 "block_size": 4096, 00:04:09.409 "num_blocks": 256, 00:04:09.409 "uuid": "c025fb5e-d4c0-421d-920d-c4e895199dc2", 00:04:09.409 "assigned_rate_limits": { 00:04:09.409 "rw_ios_per_sec": 0, 00:04:09.409 "rw_mbytes_per_sec": 0, 00:04:09.409 "r_mbytes_per_sec": 0, 00:04:09.409 "w_mbytes_per_sec": 0 00:04:09.409 }, 00:04:09.409 "claimed": false, 00:04:09.409 "zoned": false, 00:04:09.409 "supported_io_types": { 00:04:09.409 "read": true, 00:04:09.409 "write": true, 00:04:09.409 "unmap": true, 00:04:09.409 "flush": true, 00:04:09.409 "reset": true, 00:04:09.409 "nvme_admin": false, 00:04:09.409 "nvme_io": false, 00:04:09.409 "nvme_io_md": false, 00:04:09.409 "write_zeroes": true, 00:04:09.409 "zcopy": true, 00:04:09.409 "get_zone_info": false, 00:04:09.409 "zone_management": false, 00:04:09.409 "zone_append": false, 00:04:09.409 "compare": false, 00:04:09.409 "compare_and_write": false, 00:04:09.409 "abort": true, 00:04:09.409 "seek_hole": false, 00:04:09.409 "seek_data": false, 00:04:09.409 "copy": true, 00:04:09.409 "nvme_iov_md": false 
00:04:09.409 }, 00:04:09.409 "memory_domains": [ 00:04:09.409 { 00:04:09.409 "dma_device_id": "system", 00:04:09.409 "dma_device_type": 1 00:04:09.409 }, 00:04:09.409 { 00:04:09.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.409 "dma_device_type": 2 00:04:09.409 } 00:04:09.409 ], 00:04:09.409 "driver_specific": {} 00:04:09.409 } 00:04:09.409 ]' 00:04:09.409 04:15:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:09.409 04:15:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:09.409 04:15:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:09.409 04:15:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.409 04:15:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.409 04:15:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.409 04:15:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:09.409 04:15:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.409 04:15:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.409 04:15:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.409 04:15:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:09.409 04:15:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:09.409 04:15:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:09.409 00:04:09.409 real 0m0.150s 00:04:09.409 user 0m0.089s 00:04:09.409 sys 0m0.023s 00:04:09.409 04:15:22 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:09.409 04:15:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:09.409 ************************************ 00:04:09.409 END TEST rpc_plugins 00:04:09.409 ************************************ 00:04:09.409 04:15:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:09.409 04:15:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:09.409 04:15:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:09.409 04:15:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.723 ************************************ 00:04:09.723 START TEST rpc_trace_cmd_test 00:04:09.723 ************************************ 00:04:09.723 04:15:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:09.723 04:15:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:09.723 04:15:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:09.723 04:15:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.723 04:15:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:09.723 04:15:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.723 04:15:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:09.723 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2752543", 00:04:09.723 "tpoint_group_mask": "0x8", 00:04:09.723 "iscsi_conn": { 00:04:09.723 "mask": "0x2", 00:04:09.723 "tpoint_mask": "0x0" 00:04:09.723 }, 00:04:09.723 "scsi": { 00:04:09.723 "mask": "0x4", 00:04:09.723 "tpoint_mask": "0x0" 00:04:09.723 }, 00:04:09.723 "bdev": { 00:04:09.723 "mask": "0x8", 00:04:09.723 "tpoint_mask": "0xffffffffffffffff" 00:04:09.723 }, 00:04:09.723 "nvmf_rdma": { 00:04:09.723 "mask": "0x10", 00:04:09.723 "tpoint_mask": "0x0" 00:04:09.723 }, 00:04:09.723 "nvmf_tcp": { 00:04:09.723 "mask": "0x20", 00:04:09.723 
"tpoint_mask": "0x0" 00:04:09.723 }, 00:04:09.723 "ftl": { 00:04:09.723 "mask": "0x40", 00:04:09.723 "tpoint_mask": "0x0" 00:04:09.723 }, 00:04:09.723 "blobfs": { 00:04:09.723 "mask": "0x80", 00:04:09.723 "tpoint_mask": "0x0" 00:04:09.723 }, 00:04:09.723 "dsa": { 00:04:09.723 "mask": "0x200", 00:04:09.723 "tpoint_mask": "0x0" 00:04:09.723 }, 00:04:09.723 "thread": { 00:04:09.723 "mask": "0x400", 00:04:09.723 "tpoint_mask": "0x0" 00:04:09.723 }, 00:04:09.723 "nvme_pcie": { 00:04:09.723 "mask": "0x800", 00:04:09.723 "tpoint_mask": "0x0" 00:04:09.723 }, 00:04:09.723 "iaa": { 00:04:09.723 "mask": "0x1000", 00:04:09.723 "tpoint_mask": "0x0" 00:04:09.723 }, 00:04:09.723 "nvme_tcp": { 00:04:09.723 "mask": "0x2000", 00:04:09.723 "tpoint_mask": "0x0" 00:04:09.723 }, 00:04:09.723 "bdev_nvme": { 00:04:09.723 "mask": "0x4000", 00:04:09.723 "tpoint_mask": "0x0" 00:04:09.723 }, 00:04:09.723 "sock": { 00:04:09.723 "mask": "0x8000", 00:04:09.723 "tpoint_mask": "0x0" 00:04:09.723 }, 00:04:09.723 "blob": { 00:04:09.723 "mask": "0x10000", 00:04:09.723 "tpoint_mask": "0x0" 00:04:09.723 }, 00:04:09.723 "bdev_raid": { 00:04:09.723 "mask": "0x20000", 00:04:09.723 "tpoint_mask": "0x0" 00:04:09.723 }, 00:04:09.724 "scheduler": { 00:04:09.724 "mask": "0x40000", 00:04:09.724 "tpoint_mask": "0x0" 00:04:09.724 } 00:04:09.724 }' 00:04:09.724 04:15:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:09.724 04:15:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:09.724 04:15:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:09.724 04:15:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:09.724 04:15:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:09.724 04:15:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:09.724 04:15:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:09.724 04:15:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:09.724 04:15:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:09.724 04:15:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:09.724 00:04:09.724 real 0m0.233s 00:04:09.724 user 0m0.193s 00:04:09.724 sys 0m0.030s 00:04:09.724 04:15:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:09.724 04:15:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:09.724 ************************************ 00:04:09.724 END TEST rpc_trace_cmd_test 00:04:09.724 ************************************ 00:04:09.724 04:15:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:09.724 04:15:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:09.724 04:15:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:09.724 04:15:23 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:09.724 04:15:23 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:09.724 04:15:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:09.724 ************************************ 00:04:09.724 START TEST rpc_daemon_integrity 00:04:09.724 ************************************ 00:04:09.724 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:09.724 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:09.724 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.724 04:15:23 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.724 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.724 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:09.724 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:09.985 { 00:04:09.985 "name": "Malloc2", 00:04:09.985 "aliases": [ 00:04:09.985 "d24673cf-09c3-43fa-8fa3-2511de293d7a" 00:04:09.985 ], 00:04:09.985 "product_name": "Malloc disk", 00:04:09.985 "block_size": 512, 00:04:09.985 "num_blocks": 16384, 00:04:09.985 "uuid": "d24673cf-09c3-43fa-8fa3-2511de293d7a", 00:04:09.985 "assigned_rate_limits": { 00:04:09.985 "rw_ios_per_sec": 0, 00:04:09.985 "rw_mbytes_per_sec": 0, 00:04:09.985 "r_mbytes_per_sec": 0, 00:04:09.985 "w_mbytes_per_sec": 0 00:04:09.985 }, 00:04:09.985 "claimed": false, 00:04:09.985 "zoned": false, 00:04:09.985 "supported_io_types": { 00:04:09.985 "read": true, 00:04:09.985 "write": true, 00:04:09.985 "unmap": true, 00:04:09.985 "flush": true, 00:04:09.985 "reset": true, 00:04:09.985 "nvme_admin": false, 00:04:09.985 "nvme_io": false, 00:04:09.985 "nvme_io_md": false, 00:04:09.985 "write_zeroes": true, 00:04:09.985 "zcopy": true, 00:04:09.985 "get_zone_info": false, 00:04:09.985 "zone_management": false, 00:04:09.985 "zone_append": false, 00:04:09.985 "compare": false, 00:04:09.985 "compare_and_write": false, 00:04:09.985 "abort": true, 00:04:09.985 "seek_hole": false, 00:04:09.985 "seek_data": false, 00:04:09.985 "copy": true, 00:04:09.985 "nvme_iov_md": false 00:04:09.985 }, 00:04:09.985 "memory_domains": [ 00:04:09.985 { 00:04:09.985 "dma_device_id": "system", 00:04:09.985 "dma_device_type": 1 00:04:09.985 }, 00:04:09.985 { 00:04:09.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.985 "dma_device_type": 2 00:04:09.985 } 00:04:09.985 ], 00:04:09.985 "driver_specific": {} 00:04:09.985 } 00:04:09.985 ]' 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.985 [2024-11-05 04:15:23.484164] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:09.985 
[2024-11-05 04:15:23.484192] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:09.985 [2024-11-05 04:15:23.484204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ce3090 00:04:09.985 [2024-11-05 04:15:23.484211] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:09.985 [2024-11-05 04:15:23.485521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:09.985 [2024-11-05 04:15:23.485542] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:09.985 Passthru0 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:09.985 { 00:04:09.985 "name": "Malloc2", 00:04:09.985 "aliases": [ 00:04:09.985 "d24673cf-09c3-43fa-8fa3-2511de293d7a" 00:04:09.985 ], 00:04:09.985 "product_name": "Malloc disk", 00:04:09.985 "block_size": 512, 00:04:09.985 "num_blocks": 16384, 00:04:09.985 "uuid": "d24673cf-09c3-43fa-8fa3-2511de293d7a", 00:04:09.985 "assigned_rate_limits": { 00:04:09.985 "rw_ios_per_sec": 0, 00:04:09.985 "rw_mbytes_per_sec": 0, 00:04:09.985 "r_mbytes_per_sec": 0, 00:04:09.985 "w_mbytes_per_sec": 0 00:04:09.985 }, 00:04:09.985 "claimed": true, 00:04:09.985 "claim_type": "exclusive_write", 00:04:09.985 "zoned": false, 00:04:09.985 "supported_io_types": { 00:04:09.985 "read": true, 00:04:09.985 "write": true, 00:04:09.985 "unmap": true, 00:04:09.985 "flush": true, 00:04:09.985 "reset": true, 00:04:09.985 "nvme_admin": false, 00:04:09.985 "nvme_io": false, 00:04:09.985 "nvme_io_md": false, 00:04:09.985 "write_zeroes": true, 00:04:09.985 "zcopy": true, 00:04:09.985 "get_zone_info": false, 00:04:09.985 "zone_management": false, 00:04:09.985 "zone_append": false, 00:04:09.985 "compare": false, 00:04:09.985 "compare_and_write": false, 00:04:09.985 "abort": true, 00:04:09.985 "seek_hole": false, 00:04:09.985 "seek_data": false, 00:04:09.985 "copy": true, 00:04:09.985 "nvme_iov_md": false 00:04:09.985 }, 00:04:09.985 "memory_domains": [ 00:04:09.985 { 00:04:09.985 "dma_device_id": "system", 00:04:09.985 "dma_device_type": 1 00:04:09.985 }, 00:04:09.985 { 00:04:09.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.985 "dma_device_type": 2 00:04:09.985 } 00:04:09.985 ], 00:04:09.985 "driver_specific": {} 00:04:09.985 }, 00:04:09.985 { 00:04:09.985 "name": "Passthru0", 00:04:09.985 "aliases": [ 00:04:09.985 "4998c98e-adbf-5627-924e-4995bb6b3450" 00:04:09.985 ], 00:04:09.985 "product_name": "passthru", 00:04:09.985 "block_size": 512, 00:04:09.985 "num_blocks": 16384, 00:04:09.985 "uuid": "4998c98e-adbf-5627-924e-4995bb6b3450", 00:04:09.985 "assigned_rate_limits": { 00:04:09.985 "rw_ios_per_sec": 0, 00:04:09.985 "rw_mbytes_per_sec": 0, 00:04:09.985 "r_mbytes_per_sec": 0, 00:04:09.985 "w_mbytes_per_sec": 0 00:04:09.985 }, 00:04:09.985 "claimed": false, 00:04:09.985 "zoned": false, 00:04:09.985 "supported_io_types": { 00:04:09.985 "read": true, 00:04:09.985 "write": true, 00:04:09.985 "unmap": true, 00:04:09.985 "flush": true, 00:04:09.985 "reset": true, 
00:04:09.985 "nvme_admin": false, 00:04:09.985 "nvme_io": false, 00:04:09.985 "nvme_io_md": false, 00:04:09.985 "write_zeroes": true, 00:04:09.985 "zcopy": true, 00:04:09.985 "get_zone_info": false, 00:04:09.985 "zone_management": false, 00:04:09.985 "zone_append": false, 00:04:09.985 "compare": false, 00:04:09.985 "compare_and_write": false, 00:04:09.985 "abort": true, 00:04:09.985 "seek_hole": false, 00:04:09.985 "seek_data": false, 00:04:09.985 "copy": true, 00:04:09.985 "nvme_iov_md": false 00:04:09.985 }, 00:04:09.985 "memory_domains": [ 00:04:09.985 { 00:04:09.985 "dma_device_id": "system", 00:04:09.985 "dma_device_type": 1 00:04:09.985 }, 00:04:09.985 { 00:04:09.985 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:09.985 "dma_device_type": 2 00:04:09.985 } 00:04:09.985 ], 00:04:09.985 "driver_specific": { 00:04:09.985 "passthru": { 00:04:09.985 "name": "Passthru0", 00:04:09.985 "base_bdev_name": "Malloc2" 00:04:09.985 } 00:04:09.985 } 00:04:09.985 } 00:04:09.985 ]' 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:09.985 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:10.247 04:15:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:10.247 00:04:10.247 real 0m0.302s 00:04:10.247 user 0m0.194s 00:04:10.247 sys 0m0.040s 00:04:10.247 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:10.247 04:15:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:10.247 ************************************ 00:04:10.247 END TEST rpc_daemon_integrity 00:04:10.247 ************************************ 00:04:10.247 04:15:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:10.247 04:15:23 rpc -- rpc/rpc.sh@84 -- # killprocess 2752543 00:04:10.247 04:15:23 rpc -- common/autotest_common.sh@952 -- # '[' -z 2752543 ']' 00:04:10.247 04:15:23 rpc -- common/autotest_common.sh@956 -- # kill -0 2752543 00:04:10.247 04:15:23 rpc -- common/autotest_common.sh@957 -- # uname 00:04:10.247 04:15:23 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:10.247 04:15:23 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2752543 
00:04:10.247 04:15:23 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:10.247 04:15:23 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:10.247 04:15:23 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2752543' 00:04:10.247 killing process with pid 2752543 00:04:10.247 04:15:23 rpc -- common/autotest_common.sh@971 -- # kill 2752543 00:04:10.247 04:15:23 rpc -- common/autotest_common.sh@976 -- # wait 2752543 00:04:10.507 00:04:10.507 real 0m2.591s 00:04:10.507 user 0m3.327s 00:04:10.507 sys 0m0.774s 00:04:10.507 04:15:23 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:10.507 04:15:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.507 ************************************ 00:04:10.507 END TEST rpc 00:04:10.507 ************************************ 00:04:10.507 04:15:23 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:10.507 04:15:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:10.507 04:15:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:10.507 04:15:23 -- common/autotest_common.sh@10 -- # set +x 00:04:10.507 ************************************ 00:04:10.507 START TEST skip_rpc 00:04:10.507 ************************************ 00:04:10.507 04:15:24 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:10.507 * Looking for test storage... 00:04:10.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:10.507 04:15:24 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:10.507 04:15:24 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:10.507 04:15:24 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:10.769 04:15:24 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:10.769 04:15:24 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:10.769 04:15:24 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:10.769 04:15:24 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:10.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.769 --rc genhtml_branch_coverage=1 00:04:10.769 --rc genhtml_function_coverage=1 00:04:10.769 --rc genhtml_legend=1 00:04:10.769 --rc geninfo_all_blocks=1 00:04:10.769 --rc geninfo_unexecuted_blocks=1 00:04:10.769 00:04:10.769 ' 00:04:10.769 04:15:24 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:10.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.769 --rc genhtml_branch_coverage=1 00:04:10.769 --rc genhtml_function_coverage=1 00:04:10.769 --rc genhtml_legend=1 00:04:10.769 --rc geninfo_all_blocks=1 00:04:10.769 --rc geninfo_unexecuted_blocks=1 00:04:10.769 00:04:10.769 ' 00:04:10.769 04:15:24 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:10.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.769 --rc genhtml_branch_coverage=1 00:04:10.769 --rc genhtml_function_coverage=1 00:04:10.769 --rc genhtml_legend=1 00:04:10.769 --rc geninfo_all_blocks=1 00:04:10.769 --rc geninfo_unexecuted_blocks=1 00:04:10.769 00:04:10.769 ' 00:04:10.769 04:15:24 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:10.769 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:10.769 --rc genhtml_branch_coverage=1 00:04:10.769 --rc genhtml_function_coverage=1 00:04:10.769 --rc genhtml_legend=1 00:04:10.769 --rc geninfo_all_blocks=1 00:04:10.769 --rc geninfo_unexecuted_blocks=1 00:04:10.769 00:04:10.769 ' 00:04:10.769 04:15:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:10.769 04:15:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:10.769 04:15:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:10.769 04:15:24 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:10.769 04:15:24 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:10.769 04:15:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.769 ************************************ 00:04:10.769 START TEST skip_rpc 00:04:10.769 ************************************ 00:04:10.769 04:15:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:10.769 
04:15:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2753091 00:04:10.769 04:15:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.769 04:15:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:10.769 04:15:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:10.769 [2024-11-05 04:15:24.307858] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:04:10.769 [2024-11-05 04:15:24.307926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2753091 ] 00:04:10.769 [2024-11-05 04:15:24.383230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.030 [2024-11-05 04:15:24.425827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2753091 00:04:16.315 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 2753091 ']' 00:04:16.316 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 2753091 00:04:16.316 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:16.316 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:16.316 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2753091 00:04:16.316 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:16.316 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:16.316 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2753091' 00:04:16.316 killing process with pid 2753091 00:04:16.316 04:15:29 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 2753091 00:04:16.316 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 2753091 00:04:16.316 00:04:16.316 real 0m5.282s 00:04:16.316 user 0m5.091s 00:04:16.316 sys 0m0.243s 00:04:16.316 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:16.316 04:15:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.316 ************************************ 00:04:16.316 END TEST skip_rpc 00:04:16.316 ************************************ 00:04:16.316 04:15:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:16.316 04:15:29 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:16.316 04:15:29 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:16.316 04:15:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.316 ************************************ 00:04:16.316 START TEST skip_rpc_with_json 00:04:16.316 ************************************ 00:04:16.316 04:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:16.316 04:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:16.316 04:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2754299 00:04:16.316 04:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.316 04:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2754299 00:04:16.316 04:15:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:16.316 04:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 2754299 ']' 00:04:16.316 04:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.316 04:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:16.316 04:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:16.316 04:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:16.316 04:15:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.316 [2024-11-05 04:15:29.672309] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
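The skip_rpc test that just completed asserts one thing: a target started with --no-rpc-server must reject RPC traffic. A minimal standalone sketch of the same check, assuming a built SPDK tree with spdk_tgt and rpc.py in their usual places (paths shortened from the ones in this log):

# start the target with the RPC server disabled, as skip_rpc.sh@15 does
./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5                                # skip_rpc.sh@19: let the reactor come up

# spdk_get_version must fail: nothing is listening on the RPC socket
if ./scripts/rpc.py spdk_get_version; then
    echo "FAIL: RPC answered although --no-rpc-server was given" >&2
fi

kill "$spdk_pid" && wait "$spdk_pid"   # mirrors the killprocess/wait pair above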
00:04:16.316 [2024-11-05 04:15:29.672368] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2754299 ] 00:04:16.316 [2024-11-05 04:15:29.749803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.316 [2024-11-05 04:15:29.791681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.887 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:16.887 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:16.887 04:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:16.887 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.887 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.887 [2024-11-05 04:15:30.480351] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:16.887 request: 00:04:16.887 { 00:04:16.887 "trtype": "tcp", 00:04:16.887 "method": "nvmf_get_transports", 00:04:16.887 "req_id": 1 00:04:16.887 } 00:04:16.887 Got JSON-RPC error response 00:04:16.887 response: 00:04:16.887 { 00:04:16.887 "code": -19, 00:04:16.887 "message": "No such device" 00:04:16.887 } 00:04:16.887 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:16.887 04:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:16.887 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.887 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:16.887 [2024-11-05 04:15:30.492471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:16.887 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:16.887 04:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:16.888 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:16.888 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:17.149 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:17.149 04:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:17.149 { 00:04:17.149 "subsystems": [ 00:04:17.149 { 00:04:17.149 "subsystem": "fsdev", 00:04:17.149 "config": [ 00:04:17.149 { 00:04:17.149 "method": "fsdev_set_opts", 00:04:17.149 "params": { 00:04:17.149 "fsdev_io_pool_size": 65535, 00:04:17.149 "fsdev_io_cache_size": 256 00:04:17.149 } 00:04:17.149 } 00:04:17.149 ] 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "subsystem": "vfio_user_target", 00:04:17.149 "config": null 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "subsystem": "keyring", 00:04:17.149 "config": [] 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "subsystem": "iobuf", 00:04:17.149 "config": [ 00:04:17.149 { 00:04:17.149 "method": "iobuf_set_options", 00:04:17.149 "params": { 00:04:17.149 "small_pool_count": 8192, 00:04:17.149 "large_pool_count": 1024, 00:04:17.149 "small_bufsize": 8192, 00:04:17.149 "large_bufsize": 135168, 00:04:17.149 "enable_numa": false 00:04:17.149 } 00:04:17.149 } 
00:04:17.149 ] 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "subsystem": "sock", 00:04:17.149 "config": [ 00:04:17.149 { 00:04:17.149 "method": "sock_set_default_impl", 00:04:17.149 "params": { 00:04:17.149 "impl_name": "posix" 00:04:17.149 } 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "method": "sock_impl_set_options", 00:04:17.149 "params": { 00:04:17.149 "impl_name": "ssl", 00:04:17.149 "recv_buf_size": 4096, 00:04:17.149 "send_buf_size": 4096, 00:04:17.149 "enable_recv_pipe": true, 00:04:17.149 "enable_quickack": false, 00:04:17.149 "enable_placement_id": 0, 00:04:17.149 "enable_zerocopy_send_server": true, 00:04:17.149 "enable_zerocopy_send_client": false, 00:04:17.149 "zerocopy_threshold": 0, 00:04:17.149 "tls_version": 0, 00:04:17.149 "enable_ktls": false 00:04:17.149 } 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "method": "sock_impl_set_options", 00:04:17.149 "params": { 00:04:17.149 "impl_name": "posix", 00:04:17.149 "recv_buf_size": 2097152, 00:04:17.149 "send_buf_size": 2097152, 00:04:17.149 "enable_recv_pipe": true, 00:04:17.149 "enable_quickack": false, 00:04:17.149 "enable_placement_id": 0, 00:04:17.149 "enable_zerocopy_send_server": true, 00:04:17.149 "enable_zerocopy_send_client": false, 00:04:17.149 "zerocopy_threshold": 0, 00:04:17.149 "tls_version": 0, 00:04:17.149 "enable_ktls": false 00:04:17.149 } 00:04:17.149 } 00:04:17.149 ] 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "subsystem": "vmd", 00:04:17.149 "config": [] 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "subsystem": "accel", 00:04:17.149 "config": [ 00:04:17.149 { 00:04:17.149 "method": "accel_set_options", 00:04:17.149 "params": { 00:04:17.149 "small_cache_size": 128, 00:04:17.149 "large_cache_size": 16, 00:04:17.149 "task_count": 2048, 00:04:17.149 "sequence_count": 2048, 00:04:17.149 "buf_count": 2048 00:04:17.149 } 00:04:17.149 } 00:04:17.149 ] 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "subsystem": "bdev", 00:04:17.149 "config": [ 00:04:17.149 { 00:04:17.149 "method": "bdev_set_options", 00:04:17.149 "params": { 00:04:17.149 "bdev_io_pool_size": 65535, 00:04:17.149 "bdev_io_cache_size": 256, 00:04:17.149 "bdev_auto_examine": true, 00:04:17.149 "iobuf_small_cache_size": 128, 00:04:17.149 "iobuf_large_cache_size": 16 00:04:17.149 } 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "method": "bdev_raid_set_options", 00:04:17.149 "params": { 00:04:17.149 "process_window_size_kb": 1024, 00:04:17.149 "process_max_bandwidth_mb_sec": 0 00:04:17.149 } 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "method": "bdev_iscsi_set_options", 00:04:17.149 "params": { 00:04:17.149 "timeout_sec": 30 00:04:17.149 } 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "method": "bdev_nvme_set_options", 00:04:17.149 "params": { 00:04:17.149 "action_on_timeout": "none", 00:04:17.149 "timeout_us": 0, 00:04:17.149 "timeout_admin_us": 0, 00:04:17.149 "keep_alive_timeout_ms": 10000, 00:04:17.149 "arbitration_burst": 0, 00:04:17.149 "low_priority_weight": 0, 00:04:17.149 "medium_priority_weight": 0, 00:04:17.149 "high_priority_weight": 0, 00:04:17.149 "nvme_adminq_poll_period_us": 10000, 00:04:17.149 "nvme_ioq_poll_period_us": 0, 00:04:17.149 "io_queue_requests": 0, 00:04:17.149 "delay_cmd_submit": true, 00:04:17.149 "transport_retry_count": 4, 00:04:17.149 "bdev_retry_count": 3, 00:04:17.149 "transport_ack_timeout": 0, 00:04:17.149 "ctrlr_loss_timeout_sec": 0, 00:04:17.149 "reconnect_delay_sec": 0, 00:04:17.149 "fast_io_fail_timeout_sec": 0, 00:04:17.149 "disable_auto_failback": false, 00:04:17.149 "generate_uuids": false, 00:04:17.149 "transport_tos": 
0, 00:04:17.149 "nvme_error_stat": false, 00:04:17.149 "rdma_srq_size": 0, 00:04:17.149 "io_path_stat": false, 00:04:17.149 "allow_accel_sequence": false, 00:04:17.149 "rdma_max_cq_size": 0, 00:04:17.149 "rdma_cm_event_timeout_ms": 0, 00:04:17.149 "dhchap_digests": [ 00:04:17.149 "sha256", 00:04:17.149 "sha384", 00:04:17.149 "sha512" 00:04:17.149 ], 00:04:17.149 "dhchap_dhgroups": [ 00:04:17.149 "null", 00:04:17.149 "ffdhe2048", 00:04:17.149 "ffdhe3072", 00:04:17.149 "ffdhe4096", 00:04:17.149 "ffdhe6144", 00:04:17.149 "ffdhe8192" 00:04:17.149 ] 00:04:17.149 } 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "method": "bdev_nvme_set_hotplug", 00:04:17.149 "params": { 00:04:17.149 "period_us": 100000, 00:04:17.149 "enable": false 00:04:17.149 } 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "method": "bdev_wait_for_examine" 00:04:17.149 } 00:04:17.149 ] 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "subsystem": "scsi", 00:04:17.149 "config": null 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "subsystem": "scheduler", 00:04:17.149 "config": [ 00:04:17.149 { 00:04:17.149 "method": "framework_set_scheduler", 00:04:17.149 "params": { 00:04:17.149 "name": "static" 00:04:17.149 } 00:04:17.149 } 00:04:17.149 ] 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "subsystem": "vhost_scsi", 00:04:17.149 "config": [] 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "subsystem": "vhost_blk", 00:04:17.149 "config": [] 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "subsystem": "ublk", 00:04:17.149 "config": [] 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "subsystem": "nbd", 00:04:17.149 "config": [] 00:04:17.149 }, 00:04:17.149 { 00:04:17.149 "subsystem": "nvmf", 00:04:17.149 "config": [ 00:04:17.149 { 00:04:17.149 "method": "nvmf_set_config", 00:04:17.149 "params": { 00:04:17.149 "discovery_filter": "match_any", 00:04:17.149 "admin_cmd_passthru": { 00:04:17.149 "identify_ctrlr": false 00:04:17.149 }, 00:04:17.150 "dhchap_digests": [ 00:04:17.150 "sha256", 00:04:17.150 "sha384", 00:04:17.150 "sha512" 00:04:17.150 ], 00:04:17.150 "dhchap_dhgroups": [ 00:04:17.150 "null", 00:04:17.150 "ffdhe2048", 00:04:17.150 "ffdhe3072", 00:04:17.150 "ffdhe4096", 00:04:17.150 "ffdhe6144", 00:04:17.150 "ffdhe8192" 00:04:17.150 ] 00:04:17.150 } 00:04:17.150 }, 00:04:17.150 { 00:04:17.150 "method": "nvmf_set_max_subsystems", 00:04:17.150 "params": { 00:04:17.150 "max_subsystems": 1024 00:04:17.150 } 00:04:17.150 }, 00:04:17.150 { 00:04:17.150 "method": "nvmf_set_crdt", 00:04:17.150 "params": { 00:04:17.150 "crdt1": 0, 00:04:17.150 "crdt2": 0, 00:04:17.150 "crdt3": 0 00:04:17.150 } 00:04:17.150 }, 00:04:17.150 { 00:04:17.150 "method": "nvmf_create_transport", 00:04:17.150 "params": { 00:04:17.150 "trtype": "TCP", 00:04:17.150 "max_queue_depth": 128, 00:04:17.150 "max_io_qpairs_per_ctrlr": 127, 00:04:17.150 "in_capsule_data_size": 4096, 00:04:17.150 "max_io_size": 131072, 00:04:17.150 "io_unit_size": 131072, 00:04:17.150 "max_aq_depth": 128, 00:04:17.150 "num_shared_buffers": 511, 00:04:17.150 "buf_cache_size": 4294967295, 00:04:17.150 "dif_insert_or_strip": false, 00:04:17.150 "zcopy": false, 00:04:17.150 "c2h_success": true, 00:04:17.150 "sock_priority": 0, 00:04:17.150 "abort_timeout_sec": 1, 00:04:17.150 "ack_timeout": 0, 00:04:17.150 "data_wr_pool_size": 0 00:04:17.150 } 00:04:17.150 } 00:04:17.150 ] 00:04:17.150 }, 00:04:17.150 { 00:04:17.150 "subsystem": "iscsi", 00:04:17.150 "config": [ 00:04:17.150 { 00:04:17.150 "method": "iscsi_set_options", 00:04:17.150 "params": { 00:04:17.150 "node_base": "iqn.2016-06.io.spdk", 00:04:17.150 "max_sessions": 
128, 00:04:17.150 "max_connections_per_session": 2, 00:04:17.150 "max_queue_depth": 64, 00:04:17.150 "default_time2wait": 2, 00:04:17.150 "default_time2retain": 20, 00:04:17.150 "first_burst_length": 8192, 00:04:17.150 "immediate_data": true, 00:04:17.150 "allow_duplicated_isid": false, 00:04:17.150 "error_recovery_level": 0, 00:04:17.150 "nop_timeout": 60, 00:04:17.150 "nop_in_interval": 30, 00:04:17.150 "disable_chap": false, 00:04:17.150 "require_chap": false, 00:04:17.150 "mutual_chap": false, 00:04:17.150 "chap_group": 0, 00:04:17.150 "max_large_datain_per_connection": 64, 00:04:17.150 "max_r2t_per_connection": 4, 00:04:17.150 "pdu_pool_size": 36864, 00:04:17.150 "immediate_data_pool_size": 16384, 00:04:17.150 "data_out_pool_size": 2048 00:04:17.150 } 00:04:17.150 } 00:04:17.150 ] 00:04:17.150 } 00:04:17.150 ] 00:04:17.150 } 00:04:17.150 04:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:17.150 04:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2754299 00:04:17.150 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2754299 ']' 00:04:17.150 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2754299 00:04:17.150 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:17.150 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:17.150 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2754299 00:04:17.150 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:17.150 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:17.150 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2754299' 00:04:17.150 killing process with pid 2754299 00:04:17.150 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2754299 00:04:17.150 04:15:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2754299 00:04:17.411 04:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2754474 00:04:17.411 04:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:17.411 04:15:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:22.702 04:15:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2754474 00:04:22.702 04:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 2754474 ']' 00:04:22.702 04:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 2754474 00:04:22.702 04:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:22.702 04:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:22.702 04:15:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2754474 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- 
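The config.json dumped above is not hand-written: skip_rpc.sh captures it from the live target with save_config, kills that target, and replays the file through --json. A condensed sketch of that round trip, under the same shortened paths:

# with a TCP transport created, snapshot the running configuration
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py save_config > test/rpc/config.json

# relaunch without an RPC server, replaying the saved JSON at startup
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json
# the test then greps the new target's log for 'TCP Transport Init'
# to prove the transport configuration was restored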
# echo 'killing process with pid 2754474' 00:04:22.702 killing process with pid 2754474 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 2754474 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 2754474 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:22.702 00:04:22.702 real 0m6.611s 00:04:22.702 user 0m6.505s 00:04:22.702 sys 0m0.581s 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.702 ************************************ 00:04:22.702 END TEST skip_rpc_with_json 00:04:22.702 ************************************ 00:04:22.702 04:15:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:22.702 04:15:36 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:22.702 04:15:36 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:22.702 04:15:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.702 ************************************ 00:04:22.702 START TEST skip_rpc_with_delay 00:04:22.702 ************************************ 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:22.702 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:22.964 
[2024-11-05 04:15:36.371582] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:22.964 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:22.964 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:22.964 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:22.964 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:22.964 00:04:22.964 real 0m0.086s 00:04:22.964 user 0m0.058s 00:04:22.964 sys 0m0.027s 00:04:22.964 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:22.964 04:15:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:22.964 ************************************ 00:04:22.964 END TEST skip_rpc_with_delay 00:04:22.964 ************************************ 00:04:22.964 04:15:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:22.964 04:15:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:22.964 04:15:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:22.964 04:15:36 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:22.964 04:15:36 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:22.964 04:15:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.964 ************************************ 00:04:22.964 START TEST exit_on_failed_rpc_init 00:04:22.964 ************************************ 00:04:22.964 04:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:22.964 04:15:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2755823 00:04:22.964 04:15:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2755823 00:04:22.964 04:15:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:22.964 04:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 2755823 ']' 00:04:22.964 04:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.964 04:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:22.964 04:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.965 04:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:22.965 04:15:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.965 [2024-11-05 04:15:36.542136] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
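The skip_rpc_with_delay result just above turns on a single rule: --wait-for-rpc only makes sense if an RPC server will exist, so pairing it with --no-rpc-server must abort startup. A sketch of the expected failure, assuming the same binary:

# app.c:842 rejects the combination before initialization finishes
./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
# stderr: Cannot use '--wait-for-rpc' if no RPC server is going to be started.
echo $?    # non-zero; the NOT wrapper above converts this into a pass (es=1)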
00:04:22.965 [2024-11-05 04:15:36.542197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2755823 ] 00:04:23.226 [2024-11-05 04:15:36.617170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.226 [2024-11-05 04:15:36.660321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.799 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:23.799 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:23.799 04:15:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:23.799 04:15:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.799 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:23.799 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.799 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.799 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.799 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.799 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.799 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.799 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:23.799 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:23.799 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:23.799 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:23.799 [2024-11-05 04:15:37.372343] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:04:23.799 [2024-11-05 04:15:37.372392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2755857 ] 00:04:24.060 [2024-11-05 04:15:37.460429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.060 [2024-11-05 04:15:37.496148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.060 [2024-11-05 04:15:37.496200] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
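The error pair printed here is the point of exit_on_failed_rpc_init: the first target owns the default RPC socket, so a second instance must fail to bind and stop itself. A minimal reproduction sketch (the alternate socket path in the last line is a hypothetical example, not from this log):

./build/bin/spdk_tgt -m 0x1 &        # first instance listens on /var/tmp/spdk.sock
sleep 5
./build/bin/spdk_tgt -m 0x2          # second instance: rpc.c reports the socket
                                     # in use and spdk_app_stop exits non-zero
# two instances can only coexist on distinct sockets, selected with -r:
# ./build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock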
00:04:24.060 [2024-11-05 04:15:37.496210] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:24.060 [2024-11-05 04:15:37.496217] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2755823 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 2755823 ']' 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 2755823 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2755823 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2755823' 00:04:24.060 killing process with pid 2755823 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 2755823 00:04:24.060 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 2755823 00:04:24.321 00:04:24.321 real 0m1.334s 00:04:24.321 user 0m1.556s 00:04:24.321 sys 0m0.383s 00:04:24.321 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.321 04:15:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:24.321 ************************************ 00:04:24.321 END TEST exit_on_failed_rpc_init 00:04:24.321 ************************************ 00:04:24.321 04:15:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:24.321 00:04:24.321 real 0m13.823s 00:04:24.321 user 0m13.417s 00:04:24.321 sys 0m1.566s 00:04:24.321 04:15:37 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.321 04:15:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.321 ************************************ 00:04:24.321 END TEST skip_rpc 00:04:24.321 ************************************ 00:04:24.321 04:15:37 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:24.321 04:15:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.321 04:15:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.321 04:15:37 -- 
common/autotest_common.sh@10 -- # set +x 00:04:24.321 ************************************ 00:04:24.321 START TEST rpc_client 00:04:24.321 ************************************ 00:04:24.321 04:15:37 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:24.583 * Looking for test storage... 00:04:24.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:24.583 04:15:38 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:24.583 04:15:38 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:24.583 04:15:38 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:24.583 04:15:38 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.583 04:15:38 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:24.583 04:15:38 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.583 04:15:38 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:24.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.583 --rc genhtml_branch_coverage=1 00:04:24.583 --rc genhtml_function_coverage=1 00:04:24.583 --rc genhtml_legend=1 00:04:24.583 --rc geninfo_all_blocks=1 00:04:24.583 --rc geninfo_unexecuted_blocks=1 00:04:24.583 00:04:24.583 ' 00:04:24.583 04:15:38 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:24.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.583 --rc genhtml_branch_coverage=1 00:04:24.583 --rc genhtml_function_coverage=1 00:04:24.583 --rc genhtml_legend=1 00:04:24.583 --rc geninfo_all_blocks=1 00:04:24.583 --rc geninfo_unexecuted_blocks=1 00:04:24.583 00:04:24.583 ' 00:04:24.583 04:15:38 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:24.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.583 --rc genhtml_branch_coverage=1 00:04:24.583 --rc genhtml_function_coverage=1 00:04:24.583 --rc genhtml_legend=1 00:04:24.583 --rc geninfo_all_blocks=1 00:04:24.583 --rc geninfo_unexecuted_blocks=1 00:04:24.583 00:04:24.583 ' 00:04:24.583 04:15:38 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:24.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.583 --rc genhtml_branch_coverage=1 00:04:24.583 --rc genhtml_function_coverage=1 00:04:24.583 --rc genhtml_legend=1 00:04:24.583 --rc geninfo_all_blocks=1 00:04:24.583 --rc geninfo_unexecuted_blocks=1 00:04:24.583 00:04:24.583 ' 00:04:24.583 04:15:38 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:24.583 OK 00:04:24.583 04:15:38 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:24.583 00:04:24.583 real 0m0.224s 00:04:24.583 user 0m0.129s 00:04:24.583 sys 0m0.108s 00:04:24.583 04:15:38 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:24.583 04:15:38 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:24.583 ************************************ 00:04:24.583 END TEST rpc_client 00:04:24.583 ************************************ 00:04:24.583 04:15:38 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
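The rpc_client suite reduces to one compiled binary; running it by hand looks like this, assuming the test binary was built along with the rest of the tree:

# exercises the C JSON-RPC client library end to end; prints OK on success
./test/rpc_client/rpc_client_test
echo $?    # 0 expected, matching the OK line above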
00:04:24.583 04:15:38 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:24.583 04:15:38 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:24.583 04:15:38 -- common/autotest_common.sh@10 -- # set +x 00:04:24.846 ************************************ 00:04:24.846 START TEST json_config 00:04:24.846 ************************************ 00:04:24.846 04:15:38 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:24.846 04:15:38 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:24.846 04:15:38 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:24.846 04:15:38 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:24.846 04:15:38 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:24.846 04:15:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:24.846 04:15:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:24.846 04:15:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:24.846 04:15:38 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:24.846 04:15:38 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:24.846 04:15:38 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:24.846 04:15:38 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:24.846 04:15:38 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:24.846 04:15:38 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:24.846 04:15:38 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:24.846 04:15:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:24.846 04:15:38 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:24.846 04:15:38 json_config -- scripts/common.sh@345 -- # : 1 00:04:24.846 04:15:38 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:24.846 04:15:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:24.846 04:15:38 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:24.846 04:15:38 json_config -- scripts/common.sh@353 -- # local d=1 00:04:24.846 04:15:38 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:24.846 04:15:38 json_config -- scripts/common.sh@355 -- # echo 1 00:04:24.846 04:15:38 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:24.846 04:15:38 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:24.846 04:15:38 json_config -- scripts/common.sh@353 -- # local d=2 00:04:24.846 04:15:38 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:24.846 04:15:38 json_config -- scripts/common.sh@355 -- # echo 2 00:04:24.846 04:15:38 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:24.846 04:15:38 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:24.846 04:15:38 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:24.846 04:15:38 json_config -- scripts/common.sh@368 -- # return 0 00:04:24.846 04:15:38 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:24.846 04:15:38 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:24.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.846 --rc genhtml_branch_coverage=1 00:04:24.846 --rc genhtml_function_coverage=1 00:04:24.846 --rc genhtml_legend=1 00:04:24.846 --rc geninfo_all_blocks=1 00:04:24.846 --rc geninfo_unexecuted_blocks=1 00:04:24.846 00:04:24.846 ' 00:04:24.846 04:15:38 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:24.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.846 --rc genhtml_branch_coverage=1 00:04:24.846 --rc genhtml_function_coverage=1 00:04:24.846 --rc genhtml_legend=1 00:04:24.846 --rc geninfo_all_blocks=1 00:04:24.846 --rc geninfo_unexecuted_blocks=1 00:04:24.846 00:04:24.846 ' 00:04:24.846 04:15:38 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:24.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.846 --rc genhtml_branch_coverage=1 00:04:24.846 --rc genhtml_function_coverage=1 00:04:24.846 --rc genhtml_legend=1 00:04:24.846 --rc geninfo_all_blocks=1 00:04:24.846 --rc geninfo_unexecuted_blocks=1 00:04:24.846 00:04:24.846 ' 00:04:24.846 04:15:38 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:24.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:24.846 --rc genhtml_branch_coverage=1 00:04:24.846 --rc genhtml_function_coverage=1 00:04:24.846 --rc genhtml_legend=1 00:04:24.846 --rc geninfo_all_blocks=1 00:04:24.846 --rc geninfo_unexecuted_blocks=1 00:04:24.846 00:04:24.846 ' 00:04:24.846 04:15:38 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
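The coverage preamble that repeats above (here and in the rpc_client block) gates LCOV_OPTS on 'lt 1.15 2', a field-by-field version compare from scripts/common.sh. A condensed sketch of the idea, not the full helper:

# split both versions on '.', then let the first differing field decide
IFS=.-: read -ra ver1 <<< "1.15"
IFS=.-: read -ra ver2 <<< "2"
if (( ${ver1[0]} < ${ver2[0]} )); then
    echo "lcov 1.15 predates 2: enable the branch/function coverage flags"
fi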
00:04:24.846 04:15:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:24.846 04:15:38 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:24.846 04:15:38 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:24.846 04:15:38 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:24.846 04:15:38 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:24.846 04:15:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.846 04:15:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.846 04:15:38 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.846 04:15:38 json_config -- paths/export.sh@5 -- # export PATH 00:04:24.846 04:15:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@51 -- # : 0 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:04:24.846 04:15:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:24.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:24.846 04:15:38 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:24.846 04:15:38 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:24.846 04:15:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:24.846 04:15:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:24.846 04:15:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:24.846 04:15:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:24.846 04:15:38 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:24.846 04:15:38 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:24.847 04:15:38 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:24.847 04:15:38 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:24.847 04:15:38 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:24.847 04:15:38 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:24.847 04:15:38 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:24.847 04:15:38 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:24.847 04:15:38 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:24.847 04:15:38 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:24.847 04:15:38 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:24.847 INFO: JSON configuration test init 00:04:24.847 04:15:38 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:24.847 04:15:38 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:24.847 04:15:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.847 04:15:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.847 04:15:38 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:24.847 04:15:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.847 04:15:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:24.847 04:15:38 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:24.847 04:15:38 json_config -- 
json_config/common.sh@9 -- # local app=target 00:04:24.847 04:15:38 json_config -- json_config/common.sh@10 -- # shift 00:04:24.847 04:15:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:24.847 04:15:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:24.847 04:15:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:24.847 04:15:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.847 04:15:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:24.847 04:15:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2756311 00:04:24.847 04:15:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:24.847 Waiting for target to run... 00:04:24.847 04:15:38 json_config -- json_config/common.sh@25 -- # waitforlisten 2756311 /var/tmp/spdk_tgt.sock 00:04:24.847 04:15:38 json_config -- common/autotest_common.sh@833 -- # '[' -z 2756311 ']' 00:04:24.847 04:15:38 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:24.847 04:15:38 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:24.847 04:15:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:24.847 04:15:38 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:24.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:24.847 04:15:38 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:24.847 04:15:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.108 [2024-11-05 04:15:38.492259] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
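Unlike the skip_rpc tests, json_config parks the target on a private socket and pauses it at --wait-for-rpc until configuration is replayed. A sketch of driving that setup by hand, with paths shortened from the log:

# target on a dedicated socket, startup held until RPC configuration arrives
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

# every management call now has to name that socket explicitly
./scripts/gen_nvme.sh --json-with-subsystems | \
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types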
00:04:25.108 [2024-11-05 04:15:38.492316] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756311 ] 00:04:25.370 [2024-11-05 04:15:38.800428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.370 [2024-11-05 04:15:38.829876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.941 04:15:39 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:25.941 04:15:39 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:25.941 04:15:39 json_config -- json_config/common.sh@26 -- # echo '' 00:04:25.941 00:04:25.941 04:15:39 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:25.941 04:15:39 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:25.941 04:15:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.941 04:15:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.941 04:15:39 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:25.941 04:15:39 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:25.941 04:15:39 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:25.941 04:15:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.941 04:15:39 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:25.941 04:15:39 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:25.941 04:15:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:26.513 04:15:39 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:26.513 04:15:39 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:26.513 04:15:39 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.513 04:15:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.513 04:15:39 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:26.513 04:15:39 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:26.513 04:15:39 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:26.513 04:15:39 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:26.513 04:15:39 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:26.513 04:15:39 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:26.513 04:15:39 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:26.513 04:15:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:26.513 04:15:40 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@54 -- # sort 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:26.513 04:15:40 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:26.513 04:15:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:26.513 04:15:40 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.513 04:15:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:26.513 04:15:40 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.774 04:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.774 MallocForNvmf0 00:04:26.774 04:15:40 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:26.774 04:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:27.034 MallocForNvmf1 00:04:27.034 04:15:40 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:27.034 04:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:27.034 [2024-11-05 04:15:40.623866] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:27.034 04:15:40 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.034 04:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.295 04:15:40 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.295 04:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.556 04:15:40 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.556 04:15:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:27.556 04:15:41 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.556 04:15:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:27.818 [2024-11-05 04:15:41.273999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:27.818 04:15:41 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:27.818 04:15:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.818 04:15:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.818 04:15:41 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:27.818 04:15:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:27.818 04:15:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.818 04:15:41 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:27.818 04:15:41 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:27.818 04:15:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.078 MallocBdevForConfigChangeCheck 00:04:28.078 04:15:41 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:28.078 04:15:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.078 04:15:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.078 04:15:41 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:28.078 04:15:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.340 04:15:41 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:28.340 INFO: shutting down applications... 
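The json_config test above assembles the whole NVMe-oF target configuration through rpc.py before saving it. Condensed from the traced commands (paths shortened, values exactly as the test uses them), the same setup can be reproduced by hand:

    # Assumes spdk_tgt is already serving RPCs on /var/tmp/spdk_tgt.sock
    RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"

    # Two malloc bdevs to act as namespaces (size in MiB, then block size)
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1

    # TCP transport (-u io-unit-size, -c in-capsule data size), then subsystem, namespaces, listener
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

    # Persist the running configuration so a later start can replay it via --json
    $RPC save_config > spdk_tgt_config.json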
00:04:28.340 04:15:41 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:28.340 04:15:41 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:28.340 04:15:41 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:28.340 04:15:41 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:28.911 Calling clear_iscsi_subsystem 00:04:28.911 Calling clear_nvmf_subsystem 00:04:28.911 Calling clear_nbd_subsystem 00:04:28.911 Calling clear_ublk_subsystem 00:04:28.911 Calling clear_vhost_blk_subsystem 00:04:28.911 Calling clear_vhost_scsi_subsystem 00:04:28.911 Calling clear_bdev_subsystem 00:04:28.911 04:15:42 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:28.911 04:15:42 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:28.911 04:15:42 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:28.911 04:15:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:28.911 04:15:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:28.911 04:15:42 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:29.172 04:15:42 json_config -- json_config/json_config.sh@352 -- # break 00:04:29.172 04:15:42 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:29.172 04:15:42 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:29.172 04:15:42 json_config -- json_config/common.sh@31 -- # local app=target 00:04:29.172 04:15:42 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:29.172 04:15:42 json_config -- json_config/common.sh@35 -- # [[ -n 2756311 ]] 00:04:29.172 04:15:42 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2756311 00:04:29.172 04:15:42 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:29.172 04:15:42 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.172 04:15:42 json_config -- json_config/common.sh@41 -- # kill -0 2756311 00:04:29.172 04:15:42 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:29.746 04:15:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:29.746 04:15:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:29.746 04:15:43 json_config -- json_config/common.sh@41 -- # kill -0 2756311 00:04:29.746 04:15:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:29.746 04:15:43 json_config -- json_config/common.sh@43 -- # break 00:04:29.746 04:15:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:29.746 04:15:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:29.746 SPDK target shutdown done 00:04:29.746 04:15:43 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:29.746 INFO: relaunching applications... 
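The shutdown traced above never force-kills immediately: json_config_test_shutdown_app sends SIGINT and then polls for up to 30 half-second intervals before giving up. The same pattern as a standalone helper (function name illustrative; the retry budget matches the trace):

    # Send SIGINT, then wait up to ~15 s for the PID to exit
    wait_for_shutdown() {
        local pid=$1 i
        kill -SIGINT "$pid"
        for ((i = 0; i < 30; i++)); do
            kill -0 "$pid" 2>/dev/null || return 0   # kill -0 only probes existence
            sleep 0.5
        done
        return 1   # still running; caller may escalate
    }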
00:04:29.746 04:15:43 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:29.746 04:15:43 json_config -- json_config/common.sh@9 -- # local app=target 00:04:29.746 04:15:43 json_config -- json_config/common.sh@10 -- # shift 00:04:29.746 04:15:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:29.746 04:15:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:29.746 04:15:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:29.746 04:15:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:29.746 04:15:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:29.746 04:15:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2757422 00:04:29.746 04:15:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:29.746 Waiting for target to run... 00:04:29.746 04:15:43 json_config -- json_config/common.sh@25 -- # waitforlisten 2757422 /var/tmp/spdk_tgt.sock 00:04:29.746 04:15:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:29.746 04:15:43 json_config -- common/autotest_common.sh@833 -- # '[' -z 2757422 ']' 00:04:29.746 04:15:43 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:29.746 04:15:43 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:29.746 04:15:43 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:29.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:29.746 04:15:43 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:29.746 04:15:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.746 [2024-11-05 04:15:43.232462] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:04:29.746 [2024-11-05 04:15:43.232519] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2757422 ] 00:04:30.007 [2024-11-05 04:15:43.517176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.007 [2024-11-05 04:15:43.546666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.578 [2024-11-05 04:15:44.061166] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:30.578 [2024-11-05 04:15:44.093553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:30.578 04:15:44 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:30.578 04:15:44 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:30.578 04:15:44 json_config -- json_config/common.sh@26 -- # echo '' 00:04:30.578 00:04:30.578 04:15:44 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:30.578 04:15:44 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:30.578 INFO: Checking if target configuration is the same... 
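The block that follows runs json_diff.sh to confirm that the relaunched target, started from spdk_tgt_config.json, reports exactly the configuration it was given. Reduced to its essentials, and assuming config_filter.py reads JSON on stdin as the trace suggests, the check is roughly:

    # Sketch of the json_diff.sh comparison (temp-file handling simplified)
    live=$(mktemp); saved=$(mktemp)
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > "$live"
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > "$saved"
    diff -u "$live" "$saved" && echo 'INFO: JSON config files are the same'
    rm -f "$live" "$saved"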
00:04:30.578 04:15:44 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:30.578 04:15:44 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.578 04:15:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:30.578 + '[' 2 -ne 2 ']' 00:04:30.578 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:30.578 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:30.578 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:30.578 +++ basename /dev/fd/62 00:04:30.578 ++ mktemp /tmp/62.XXX 00:04:30.578 + tmp_file_1=/tmp/62.iZg 00:04:30.578 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:30.578 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:30.578 + tmp_file_2=/tmp/spdk_tgt_config.json.p9P 00:04:30.578 + ret=0 00:04:30.578 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:30.839 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:31.100 + diff -u /tmp/62.iZg /tmp/spdk_tgt_config.json.p9P 00:04:31.100 + echo 'INFO: JSON config files are the same' 00:04:31.100 INFO: JSON config files are the same 00:04:31.100 + rm /tmp/62.iZg /tmp/spdk_tgt_config.json.p9P 00:04:31.100 + exit 0 00:04:31.100 04:15:44 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:31.100 04:15:44 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:31.100 INFO: changing configuration and checking if this can be detected... 00:04:31.100 04:15:44 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.100 04:15:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:31.100 04:15:44 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.100 04:15:44 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:31.100 04:15:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:31.100 + '[' 2 -ne 2 ']' 00:04:31.100 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:31.100 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:31.100 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:31.100 +++ basename /dev/fd/62 00:04:31.100 ++ mktemp /tmp/62.XXX 00:04:31.100 + tmp_file_1=/tmp/62.sAt 00:04:31.100 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.100 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:31.100 + tmp_file_2=/tmp/spdk_tgt_config.json.myH 00:04:31.100 + ret=0 00:04:31.100 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:31.671 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:31.671 + diff -u /tmp/62.sAt /tmp/spdk_tgt_config.json.myH 00:04:31.671 + ret=1 00:04:31.671 + echo '=== Start of file: /tmp/62.sAt ===' 00:04:31.671 + cat /tmp/62.sAt 00:04:31.671 + echo '=== End of file: /tmp/62.sAt ===' 00:04:31.671 + echo '' 00:04:31.671 + echo '=== Start of file: /tmp/spdk_tgt_config.json.myH ===' 00:04:31.671 + cat /tmp/spdk_tgt_config.json.myH 00:04:31.671 + echo '=== End of file: /tmp/spdk_tgt_config.json.myH ===' 00:04:31.671 + echo '' 00:04:31.671 + rm /tmp/62.sAt /tmp/spdk_tgt_config.json.myH 00:04:31.671 + exit 1 00:04:31.671 04:15:45 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:31.671 INFO: configuration change detected. 00:04:31.671 04:15:45 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:31.672 04:15:45 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:31.672 04:15:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:31.672 04:15:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.672 04:15:45 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:31.672 04:15:45 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:31.672 04:15:45 json_config -- json_config/json_config.sh@324 -- # [[ -n 2757422 ]] 00:04:31.672 04:15:45 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:31.672 04:15:45 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:31.672 04:15:45 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:31.672 04:15:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.672 04:15:45 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:31.672 04:15:45 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:31.672 04:15:45 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:31.672 04:15:45 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:31.672 04:15:45 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:31.672 04:15:45 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:31.672 04:15:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:31.672 04:15:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.672 04:15:45 json_config -- json_config/json_config.sh@330 -- # killprocess 2757422 00:04:31.672 04:15:45 json_config -- common/autotest_common.sh@952 -- # '[' -z 2757422 ']' 00:04:31.672 04:15:45 json_config -- common/autotest_common.sh@956 -- # kill -0 2757422 00:04:31.672 04:15:45 json_config -- common/autotest_common.sh@957 -- # uname 00:04:31.672 04:15:45 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:31.672 04:15:45 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2757422 00:04:31.672 04:15:45 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:31.672 04:15:45 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:31.672 04:15:45 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2757422' 00:04:31.672 killing process with pid 2757422 00:04:31.672 04:15:45 json_config -- common/autotest_common.sh@971 -- # kill 2757422 00:04:31.672 04:15:45 json_config -- common/autotest_common.sh@976 -- # wait 2757422 00:04:31.933 04:15:45 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:31.933 04:15:45 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:31.933 04:15:45 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:31.933 04:15:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.933 04:15:45 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:31.933 04:15:45 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:31.933 INFO: Success 00:04:31.933 00:04:31.933 real 0m7.292s 00:04:31.933 user 0m8.777s 00:04:31.933 sys 0m1.929s 00:04:31.933 04:15:45 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:31.933 04:15:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.933 ************************************ 00:04:31.933 END TEST json_config 00:04:31.933 ************************************ 00:04:31.933 04:15:45 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:31.933 04:15:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:31.933 04:15:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:31.933 04:15:45 -- common/autotest_common.sh@10 -- # set +x 00:04:32.195 ************************************ 00:04:32.195 START TEST json_config_extra_key 00:04:32.195 ************************************ 00:04:32.195 04:15:45 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:32.195 04:15:45 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:32.195 04:15:45 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:32.195 04:15:45 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:32.195 04:15:45 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:32.195 04:15:45 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:32.195 04:15:45 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:32.195 04:15:45 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:32.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.195 --rc genhtml_branch_coverage=1 00:04:32.195 --rc genhtml_function_coverage=1 00:04:32.195 --rc genhtml_legend=1 00:04:32.195 --rc geninfo_all_blocks=1 00:04:32.195 --rc geninfo_unexecuted_blocks=1 00:04:32.195 00:04:32.195 ' 00:04:32.195 04:15:45 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:32.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.195 --rc genhtml_branch_coverage=1 00:04:32.195 --rc genhtml_function_coverage=1 00:04:32.195 --rc genhtml_legend=1 00:04:32.195 --rc geninfo_all_blocks=1 00:04:32.195 --rc geninfo_unexecuted_blocks=1 00:04:32.195 00:04:32.195 ' 00:04:32.195 04:15:45 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:32.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.195 --rc genhtml_branch_coverage=1 00:04:32.195 --rc genhtml_function_coverage=1 00:04:32.195 --rc genhtml_legend=1 00:04:32.195 --rc geninfo_all_blocks=1 00:04:32.195 --rc geninfo_unexecuted_blocks=1 00:04:32.195 00:04:32.195 ' 00:04:32.195 04:15:45 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:32.195 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:32.195 --rc genhtml_branch_coverage=1 00:04:32.195 --rc genhtml_function_coverage=1 00:04:32.195 --rc genhtml_legend=1 00:04:32.195 --rc geninfo_all_blocks=1 00:04:32.195 --rc geninfo_unexecuted_blocks=1 00:04:32.195 00:04:32.195 ' 00:04:32.195 04:15:45 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:32.195 04:15:45 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:32.195 04:15:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.195 04:15:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.195 04:15:45 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.195 04:15:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:32.195 04:15:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:32.195 04:15:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:32.195 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:32.196 04:15:45 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:32.196 04:15:45 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:32.196 04:15:45 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:32.196 04:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:32.196 04:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:32.196 04:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:32.196 04:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:32.196 04:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:32.196 04:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:32.196 04:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:32.196 04:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:32.196 04:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:32.196 04:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:32.196 04:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:32.196 INFO: launching applications... 
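One genuine warning is recorded just above: line 33 of nvmf/common.sh evaluates '[' '' -eq 1 ']' and bash reports "[: : integer expression expected", because -eq needs integer operands on both sides and the variable expanded to an empty string. A guarded form (variable name illustrative) defaults empty or unset values before the numeric test:

    flag=''
    [ "$flag" -eq 1 ] && echo on       # errors: [: : integer expression expected
    [ "${flag:-0}" -eq 1 ] && echo on  # quietly false: empty/unset defaults to 0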
00:04:32.196 04:15:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:32.196 04:15:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:32.196 04:15:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:32.196 04:15:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.196 04:15:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.196 04:15:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.196 04:15:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.196 04:15:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.196 04:15:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2757917 00:04:32.196 04:15:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:32.196 Waiting for target to run... 00:04:32.196 04:15:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2757917 /var/tmp/spdk_tgt.sock 00:04:32.196 04:15:45 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 2757917 ']' 00:04:32.196 04:15:45 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.196 04:15:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:32.196 04:15:45 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:32.196 04:15:45 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:32.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:32.196 04:15:45 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:32.196 04:15:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:32.457 [2024-11-05 04:15:45.861917] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:04:32.457 [2024-11-05 04:15:45.861992] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2757917 ] 00:04:32.718 [2024-11-05 04:15:46.141276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.718 [2024-11-05 04:15:46.170408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.290 04:15:46 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:33.290 04:15:46 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:33.290 04:15:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:33.290 00:04:33.290 04:15:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:33.290 INFO: shutting down applications... 
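Each app start above gates on waitforlisten <pid> /var/tmp/spdk_tgt.sock (max_retries=100) before any RPC is issued. The real helper lives in autotest_common.sh; a plausible minimal stand-in, not SPDK's actual implementation, polls the socket with a cheap RPC until the target answers or its process dies:

    # Illustrative stand-in for waitforlisten (poll interval is a guess)
    waitfor() {
        local pid=$1 sock=$2 i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1        # target died during startup
            scripts/rpc.py -s "$sock" spdk_get_version \
                >/dev/null 2>&1 && return 0               # RPC answered: it is listening
            sleep 0.1
        done
        return 1
    }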
00:04:33.290 04:15:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:33.290 04:15:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:33.290 04:15:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.290 04:15:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2757917 ]] 00:04:33.290 04:15:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2757917 00:04:33.290 04:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.290 04:15:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.290 04:15:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2757917 00:04:33.290 04:15:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.551 04:15:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.551 04:15:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.551 04:15:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2757917 00:04:33.551 04:15:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.551 04:15:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:33.551 04:15:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.551 04:15:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.551 SPDK target shutdown done 00:04:33.551 04:15:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:33.551 Success 00:04:33.551 00:04:33.551 real 0m1.558s 00:04:33.552 user 0m1.196s 00:04:33.552 sys 0m0.390s 00:04:33.552 04:15:47 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:33.552 04:15:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:33.552 ************************************ 00:04:33.552 END TEST json_config_extra_key 00:04:33.552 ************************************ 00:04:33.552 04:15:47 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:33.552 04:15:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:33.813 04:15:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:33.813 04:15:47 -- common/autotest_common.sh@10 -- # set +x 00:04:33.813 ************************************ 00:04:33.813 START TEST alias_rpc 00:04:33.813 ************************************ 00:04:33.813 04:15:47 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:33.813 * Looking for test storage... 
00:04:33.813 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:33.813 04:15:47 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:33.813 04:15:47 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:33.813 04:15:47 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:33.813 04:15:47 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.813 04:15:47 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:33.813 04:15:47 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.813 04:15:47 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:33.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.813 --rc genhtml_branch_coverage=1 00:04:33.813 --rc genhtml_function_coverage=1 00:04:33.813 --rc genhtml_legend=1 00:04:33.813 --rc geninfo_all_blocks=1 00:04:33.813 --rc geninfo_unexecuted_blocks=1 00:04:33.813 00:04:33.813 ' 00:04:33.813 04:15:47 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:33.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.813 --rc genhtml_branch_coverage=1 00:04:33.813 --rc genhtml_function_coverage=1 00:04:33.813 --rc genhtml_legend=1 00:04:33.813 --rc geninfo_all_blocks=1 00:04:33.813 --rc geninfo_unexecuted_blocks=1 00:04:33.813 00:04:33.813 ' 00:04:33.813 04:15:47 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:33.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.813 --rc genhtml_branch_coverage=1 00:04:33.813 --rc genhtml_function_coverage=1 00:04:33.813 --rc genhtml_legend=1 00:04:33.813 --rc geninfo_all_blocks=1 00:04:33.813 --rc geninfo_unexecuted_blocks=1 00:04:33.813 00:04:33.813 ' 00:04:33.813 04:15:47 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:33.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.813 --rc genhtml_branch_coverage=1 00:04:33.813 --rc genhtml_function_coverage=1 00:04:33.813 --rc genhtml_legend=1 00:04:33.813 --rc geninfo_all_blocks=1 00:04:33.813 --rc geninfo_unexecuted_blocks=1 00:04:33.813 00:04:33.813 ' 00:04:33.813 04:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:33.813 04:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2758316 00:04:33.813 04:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2758316 00:04:33.813 04:15:47 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 2758316 ']' 00:04:33.813 04:15:47 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.813 04:15:47 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:33.814 04:15:47 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.814 04:15:47 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:33.814 04:15:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.814 04:15:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.075 [2024-11-05 04:15:47.486065] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:04:34.075 [2024-11-05 04:15:47.486119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2758316 ] 00:04:34.075 [2024-11-05 04:15:47.557523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.075 [2024-11-05 04:15:47.593586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.647 04:15:48 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:34.647 04:15:48 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:34.647 04:15:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:34.908 04:15:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2758316 00:04:34.908 04:15:48 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 2758316 ']' 00:04:34.908 04:15:48 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 2758316 00:04:34.908 04:15:48 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:34.908 04:15:48 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:34.908 04:15:48 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2758316 00:04:34.908 04:15:48 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:34.908 04:15:48 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:34.908 04:15:48 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2758316' 00:04:34.908 killing process with pid 2758316 00:04:34.908 04:15:48 alias_rpc -- common/autotest_common.sh@971 -- # kill 2758316 00:04:34.908 04:15:48 alias_rpc -- common/autotest_common.sh@976 -- # wait 2758316 00:04:35.169 00:04:35.169 real 0m1.507s 00:04:35.169 user 0m1.673s 00:04:35.169 sys 0m0.387s 00:04:35.169 04:15:48 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.169 04:15:48 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.169 ************************************ 00:04:35.169 END TEST alias_rpc 00:04:35.169 ************************************ 00:04:35.169 04:15:48 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:35.169 04:15:48 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:35.169 04:15:48 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.169 04:15:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.169 04:15:48 -- common/autotest_common.sh@10 -- # set +x 00:04:35.430 ************************************ 00:04:35.430 START TEST spdkcli_tcp 00:04:35.430 ************************************ 00:04:35.430 04:15:48 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:35.430 * Looking for test storage... 
00:04:35.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:35.430 04:15:48 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:35.430 04:15:48 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:35.430 04:15:48 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:35.430 04:15:48 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:35.430 04:15:48 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:35.431 04:15:48 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:35.431 04:15:49 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:35.431 04:15:49 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:35.431 04:15:49 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:35.431 04:15:49 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:35.431 04:15:49 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:35.431 04:15:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:35.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.431 --rc genhtml_branch_coverage=1 00:04:35.431 --rc genhtml_function_coverage=1 00:04:35.431 --rc genhtml_legend=1 00:04:35.431 --rc geninfo_all_blocks=1 00:04:35.431 --rc geninfo_unexecuted_blocks=1 00:04:35.431 00:04:35.431 ' 00:04:35.431 04:15:49 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:35.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.431 --rc genhtml_branch_coverage=1 00:04:35.431 --rc genhtml_function_coverage=1 00:04:35.431 --rc genhtml_legend=1 00:04:35.431 --rc geninfo_all_blocks=1 00:04:35.431 --rc 
geninfo_unexecuted_blocks=1 00:04:35.431 00:04:35.431 ' 00:04:35.431 04:15:49 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:35.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.431 --rc genhtml_branch_coverage=1 00:04:35.431 --rc genhtml_function_coverage=1 00:04:35.431 --rc genhtml_legend=1 00:04:35.431 --rc geninfo_all_blocks=1 00:04:35.431 --rc geninfo_unexecuted_blocks=1 00:04:35.431 00:04:35.431 ' 00:04:35.431 04:15:49 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:35.431 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:35.431 --rc genhtml_branch_coverage=1 00:04:35.431 --rc genhtml_function_coverage=1 00:04:35.431 --rc genhtml_legend=1 00:04:35.431 --rc geninfo_all_blocks=1 00:04:35.431 --rc geninfo_unexecuted_blocks=1 00:04:35.431 00:04:35.431 ' 00:04:35.431 04:15:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:35.431 04:15:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:35.431 04:15:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:35.431 04:15:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:35.431 04:15:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:35.431 04:15:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:35.431 04:15:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:35.431 04:15:49 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:35.431 04:15:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.431 04:15:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2758710 00:04:35.431 04:15:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:35.431 04:15:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2758710 00:04:35.431 04:15:49 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 2758710 ']' 00:04:35.431 04:15:49 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.431 04:15:49 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:35.431 04:15:49 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.431 04:15:49 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:35.431 04:15:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.431 [2024-11-05 04:15:49.052158] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:04:35.431 [2024-11-05 04:15:49.052216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2758710 ] 00:04:35.692 [2024-11-05 04:15:49.121963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:35.692 [2024-11-05 04:15:49.158862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:35.692 [2024-11-05 04:15:49.158864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.692 04:15:49 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:35.692 04:15:49 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:35.692 04:15:49 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2758729 00:04:35.692 04:15:49 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:35.692 04:15:49 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:35.953 [ 00:04:35.953 "bdev_malloc_delete", 00:04:35.953 "bdev_malloc_create", 00:04:35.953 "bdev_null_resize", 00:04:35.953 "bdev_null_delete", 00:04:35.953 "bdev_null_create", 00:04:35.953 "bdev_nvme_cuse_unregister", 00:04:35.953 "bdev_nvme_cuse_register", 00:04:35.953 "bdev_opal_new_user", 00:04:35.953 "bdev_opal_set_lock_state", 00:04:35.953 "bdev_opal_delete", 00:04:35.953 "bdev_opal_get_info", 00:04:35.953 "bdev_opal_create", 00:04:35.953 "bdev_nvme_opal_revert", 00:04:35.953 "bdev_nvme_opal_init", 00:04:35.953 "bdev_nvme_send_cmd", 00:04:35.953 "bdev_nvme_set_keys", 00:04:35.953 "bdev_nvme_get_path_iostat", 00:04:35.953 "bdev_nvme_get_mdns_discovery_info", 00:04:35.953 "bdev_nvme_stop_mdns_discovery", 00:04:35.953 "bdev_nvme_start_mdns_discovery", 00:04:35.953 "bdev_nvme_set_multipath_policy", 00:04:35.953 "bdev_nvme_set_preferred_path", 00:04:35.953 "bdev_nvme_get_io_paths", 00:04:35.953 "bdev_nvme_remove_error_injection", 00:04:35.953 "bdev_nvme_add_error_injection", 00:04:35.953 "bdev_nvme_get_discovery_info", 00:04:35.953 "bdev_nvme_stop_discovery", 00:04:35.953 "bdev_nvme_start_discovery", 00:04:35.953 "bdev_nvme_get_controller_health_info", 00:04:35.953 "bdev_nvme_disable_controller", 00:04:35.953 "bdev_nvme_enable_controller", 00:04:35.953 "bdev_nvme_reset_controller", 00:04:35.953 "bdev_nvme_get_transport_statistics", 00:04:35.953 "bdev_nvme_apply_firmware", 00:04:35.953 "bdev_nvme_detach_controller", 00:04:35.953 "bdev_nvme_get_controllers", 00:04:35.953 "bdev_nvme_attach_controller", 00:04:35.953 "bdev_nvme_set_hotplug", 00:04:35.953 "bdev_nvme_set_options", 00:04:35.953 "bdev_passthru_delete", 00:04:35.953 "bdev_passthru_create", 00:04:35.954 "bdev_lvol_set_parent_bdev", 00:04:35.954 "bdev_lvol_set_parent", 00:04:35.954 "bdev_lvol_check_shallow_copy", 00:04:35.954 "bdev_lvol_start_shallow_copy", 00:04:35.954 "bdev_lvol_grow_lvstore", 00:04:35.954 "bdev_lvol_get_lvols", 00:04:35.954 "bdev_lvol_get_lvstores", 00:04:35.954 "bdev_lvol_delete", 00:04:35.954 "bdev_lvol_set_read_only", 00:04:35.954 "bdev_lvol_resize", 00:04:35.954 "bdev_lvol_decouple_parent", 00:04:35.954 "bdev_lvol_inflate", 00:04:35.954 "bdev_lvol_rename", 00:04:35.954 "bdev_lvol_clone_bdev", 00:04:35.954 "bdev_lvol_clone", 00:04:35.954 "bdev_lvol_snapshot", 00:04:35.954 "bdev_lvol_create", 00:04:35.954 "bdev_lvol_delete_lvstore", 00:04:35.954 "bdev_lvol_rename_lvstore", 
00:04:35.954 "bdev_lvol_create_lvstore", 00:04:35.954 "bdev_raid_set_options", 00:04:35.954 "bdev_raid_remove_base_bdev", 00:04:35.954 "bdev_raid_add_base_bdev", 00:04:35.954 "bdev_raid_delete", 00:04:35.954 "bdev_raid_create", 00:04:35.954 "bdev_raid_get_bdevs", 00:04:35.954 "bdev_error_inject_error", 00:04:35.954 "bdev_error_delete", 00:04:35.954 "bdev_error_create", 00:04:35.954 "bdev_split_delete", 00:04:35.954 "bdev_split_create", 00:04:35.954 "bdev_delay_delete", 00:04:35.954 "bdev_delay_create", 00:04:35.954 "bdev_delay_update_latency", 00:04:35.954 "bdev_zone_block_delete", 00:04:35.954 "bdev_zone_block_create", 00:04:35.954 "blobfs_create", 00:04:35.954 "blobfs_detect", 00:04:35.954 "blobfs_set_cache_size", 00:04:35.954 "bdev_aio_delete", 00:04:35.954 "bdev_aio_rescan", 00:04:35.954 "bdev_aio_create", 00:04:35.954 "bdev_ftl_set_property", 00:04:35.954 "bdev_ftl_get_properties", 00:04:35.954 "bdev_ftl_get_stats", 00:04:35.954 "bdev_ftl_unmap", 00:04:35.954 "bdev_ftl_unload", 00:04:35.954 "bdev_ftl_delete", 00:04:35.954 "bdev_ftl_load", 00:04:35.954 "bdev_ftl_create", 00:04:35.954 "bdev_virtio_attach_controller", 00:04:35.954 "bdev_virtio_scsi_get_devices", 00:04:35.954 "bdev_virtio_detach_controller", 00:04:35.954 "bdev_virtio_blk_set_hotplug", 00:04:35.954 "bdev_iscsi_delete", 00:04:35.954 "bdev_iscsi_create", 00:04:35.954 "bdev_iscsi_set_options", 00:04:35.954 "accel_error_inject_error", 00:04:35.954 "ioat_scan_accel_module", 00:04:35.954 "dsa_scan_accel_module", 00:04:35.954 "iaa_scan_accel_module", 00:04:35.954 "vfu_virtio_create_fs_endpoint", 00:04:35.954 "vfu_virtio_create_scsi_endpoint", 00:04:35.954 "vfu_virtio_scsi_remove_target", 00:04:35.954 "vfu_virtio_scsi_add_target", 00:04:35.954 "vfu_virtio_create_blk_endpoint", 00:04:35.954 "vfu_virtio_delete_endpoint", 00:04:35.954 "keyring_file_remove_key", 00:04:35.954 "keyring_file_add_key", 00:04:35.954 "keyring_linux_set_options", 00:04:35.954 "fsdev_aio_delete", 00:04:35.954 "fsdev_aio_create", 00:04:35.954 "iscsi_get_histogram", 00:04:35.954 "iscsi_enable_histogram", 00:04:35.954 "iscsi_set_options", 00:04:35.954 "iscsi_get_auth_groups", 00:04:35.954 "iscsi_auth_group_remove_secret", 00:04:35.954 "iscsi_auth_group_add_secret", 00:04:35.954 "iscsi_delete_auth_group", 00:04:35.954 "iscsi_create_auth_group", 00:04:35.954 "iscsi_set_discovery_auth", 00:04:35.954 "iscsi_get_options", 00:04:35.954 "iscsi_target_node_request_logout", 00:04:35.954 "iscsi_target_node_set_redirect", 00:04:35.954 "iscsi_target_node_set_auth", 00:04:35.954 "iscsi_target_node_add_lun", 00:04:35.954 "iscsi_get_stats", 00:04:35.954 "iscsi_get_connections", 00:04:35.954 "iscsi_portal_group_set_auth", 00:04:35.954 "iscsi_start_portal_group", 00:04:35.954 "iscsi_delete_portal_group", 00:04:35.954 "iscsi_create_portal_group", 00:04:35.954 "iscsi_get_portal_groups", 00:04:35.954 "iscsi_delete_target_node", 00:04:35.954 "iscsi_target_node_remove_pg_ig_maps", 00:04:35.954 "iscsi_target_node_add_pg_ig_maps", 00:04:35.954 "iscsi_create_target_node", 00:04:35.954 "iscsi_get_target_nodes", 00:04:35.954 "iscsi_delete_initiator_group", 00:04:35.954 "iscsi_initiator_group_remove_initiators", 00:04:35.954 "iscsi_initiator_group_add_initiators", 00:04:35.954 "iscsi_create_initiator_group", 00:04:35.954 "iscsi_get_initiator_groups", 00:04:35.954 "nvmf_set_crdt", 00:04:35.954 "nvmf_set_config", 00:04:35.954 "nvmf_set_max_subsystems", 00:04:35.954 "nvmf_stop_mdns_prr", 00:04:35.954 "nvmf_publish_mdns_prr", 00:04:35.954 "nvmf_subsystem_get_listeners", 00:04:35.954 
"nvmf_subsystem_get_qpairs", 00:04:35.954 "nvmf_subsystem_get_controllers", 00:04:35.954 "nvmf_get_stats", 00:04:35.954 "nvmf_get_transports", 00:04:35.954 "nvmf_create_transport", 00:04:35.954 "nvmf_get_targets", 00:04:35.954 "nvmf_delete_target", 00:04:35.954 "nvmf_create_target", 00:04:35.954 "nvmf_subsystem_allow_any_host", 00:04:35.954 "nvmf_subsystem_set_keys", 00:04:35.954 "nvmf_subsystem_remove_host", 00:04:35.954 "nvmf_subsystem_add_host", 00:04:35.954 "nvmf_ns_remove_host", 00:04:35.954 "nvmf_ns_add_host", 00:04:35.954 "nvmf_subsystem_remove_ns", 00:04:35.954 "nvmf_subsystem_set_ns_ana_group", 00:04:35.954 "nvmf_subsystem_add_ns", 00:04:35.954 "nvmf_subsystem_listener_set_ana_state", 00:04:35.954 "nvmf_discovery_get_referrals", 00:04:35.954 "nvmf_discovery_remove_referral", 00:04:35.954 "nvmf_discovery_add_referral", 00:04:35.954 "nvmf_subsystem_remove_listener", 00:04:35.954 "nvmf_subsystem_add_listener", 00:04:35.954 "nvmf_delete_subsystem", 00:04:35.954 "nvmf_create_subsystem", 00:04:35.954 "nvmf_get_subsystems", 00:04:35.954 "env_dpdk_get_mem_stats", 00:04:35.954 "nbd_get_disks", 00:04:35.954 "nbd_stop_disk", 00:04:35.954 "nbd_start_disk", 00:04:35.954 "ublk_recover_disk", 00:04:35.954 "ublk_get_disks", 00:04:35.954 "ublk_stop_disk", 00:04:35.954 "ublk_start_disk", 00:04:35.954 "ublk_destroy_target", 00:04:35.954 "ublk_create_target", 00:04:35.954 "virtio_blk_create_transport", 00:04:35.954 "virtio_blk_get_transports", 00:04:35.954 "vhost_controller_set_coalescing", 00:04:35.954 "vhost_get_controllers", 00:04:35.954 "vhost_delete_controller", 00:04:35.954 "vhost_create_blk_controller", 00:04:35.954 "vhost_scsi_controller_remove_target", 00:04:35.954 "vhost_scsi_controller_add_target", 00:04:35.954 "vhost_start_scsi_controller", 00:04:35.954 "vhost_create_scsi_controller", 00:04:35.954 "thread_set_cpumask", 00:04:35.954 "scheduler_set_options", 00:04:35.954 "framework_get_governor", 00:04:35.954 "framework_get_scheduler", 00:04:35.954 "framework_set_scheduler", 00:04:35.954 "framework_get_reactors", 00:04:35.954 "thread_get_io_channels", 00:04:35.954 "thread_get_pollers", 00:04:35.954 "thread_get_stats", 00:04:35.954 "framework_monitor_context_switch", 00:04:35.954 "spdk_kill_instance", 00:04:35.954 "log_enable_timestamps", 00:04:35.954 "log_get_flags", 00:04:35.954 "log_clear_flag", 00:04:35.954 "log_set_flag", 00:04:35.954 "log_get_level", 00:04:35.954 "log_set_level", 00:04:35.954 "log_get_print_level", 00:04:35.954 "log_set_print_level", 00:04:35.954 "framework_enable_cpumask_locks", 00:04:35.954 "framework_disable_cpumask_locks", 00:04:35.954 "framework_wait_init", 00:04:35.954 "framework_start_init", 00:04:35.954 "scsi_get_devices", 00:04:35.954 "bdev_get_histogram", 00:04:35.954 "bdev_enable_histogram", 00:04:35.954 "bdev_set_qos_limit", 00:04:35.954 "bdev_set_qd_sampling_period", 00:04:35.954 "bdev_get_bdevs", 00:04:35.954 "bdev_reset_iostat", 00:04:35.954 "bdev_get_iostat", 00:04:35.954 "bdev_examine", 00:04:35.954 "bdev_wait_for_examine", 00:04:35.954 "bdev_set_options", 00:04:35.954 "accel_get_stats", 00:04:35.954 "accel_set_options", 00:04:35.954 "accel_set_driver", 00:04:35.954 "accel_crypto_key_destroy", 00:04:35.954 "accel_crypto_keys_get", 00:04:35.954 "accel_crypto_key_create", 00:04:35.954 "accel_assign_opc", 00:04:35.954 "accel_get_module_info", 00:04:35.954 "accel_get_opc_assignments", 00:04:35.954 "vmd_rescan", 00:04:35.954 "vmd_remove_device", 00:04:35.954 "vmd_enable", 00:04:35.954 "sock_get_default_impl", 00:04:35.954 "sock_set_default_impl", 
00:04:35.954 "sock_impl_set_options", 00:04:35.954 "sock_impl_get_options", 00:04:35.954 "iobuf_get_stats", 00:04:35.954 "iobuf_set_options", 00:04:35.954 "keyring_get_keys", 00:04:35.954 "vfu_tgt_set_base_path", 00:04:35.954 "framework_get_pci_devices", 00:04:35.954 "framework_get_config", 00:04:35.954 "framework_get_subsystems", 00:04:35.954 "fsdev_set_opts", 00:04:35.954 "fsdev_get_opts", 00:04:35.954 "trace_get_info", 00:04:35.954 "trace_get_tpoint_group_mask", 00:04:35.954 "trace_disable_tpoint_group", 00:04:35.954 "trace_enable_tpoint_group", 00:04:35.954 "trace_clear_tpoint_mask", 00:04:35.954 "trace_set_tpoint_mask", 00:04:35.954 "notify_get_notifications", 00:04:35.954 "notify_get_types", 00:04:35.954 "spdk_get_version", 00:04:35.954 "rpc_get_methods" 00:04:35.954 ] 00:04:35.954 04:15:49 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:35.954 04:15:49 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:35.954 04:15:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:35.954 04:15:49 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:35.954 04:15:49 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2758710 00:04:35.954 04:15:49 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 2758710 ']' 00:04:35.954 04:15:49 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 2758710 00:04:35.954 04:15:49 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:35.954 04:15:49 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:35.954 04:15:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2758710 00:04:36.216 04:15:49 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:36.216 04:15:49 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:36.216 04:15:49 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2758710' 00:04:36.216 killing process with pid 2758710 00:04:36.216 04:15:49 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 2758710 00:04:36.216 04:15:49 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 2758710 00:04:36.216 00:04:36.216 real 0m1.011s 00:04:36.216 user 0m1.684s 00:04:36.216 sys 0m0.398s 00:04:36.216 04:15:49 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:36.216 04:15:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:36.216 ************************************ 00:04:36.216 END TEST spdkcli_tcp 00:04:36.216 ************************************ 00:04:36.478 04:15:49 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:36.478 04:15:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:36.478 04:15:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:36.478 04:15:49 -- common/autotest_common.sh@10 -- # set +x 00:04:36.478 ************************************ 00:04:36.478 START TEST dpdk_mem_utility 00:04:36.478 ************************************ 00:04:36.478 04:15:49 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:36.478 * Looking for test storage... 
00:04:36.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:36.478 04:15:49 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:36.478 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:36.478 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:36.478 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.478 04:15:50 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:36.478 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.478 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:36.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.478 --rc genhtml_branch_coverage=1 00:04:36.478 --rc genhtml_function_coverage=1 00:04:36.478 --rc genhtml_legend=1 00:04:36.478 --rc geninfo_all_blocks=1 00:04:36.478 --rc geninfo_unexecuted_blocks=1 00:04:36.478 00:04:36.478 ' 00:04:36.478 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:36.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.478 --rc 
genhtml_branch_coverage=1 00:04:36.478 --rc genhtml_function_coverage=1 00:04:36.478 --rc genhtml_legend=1 00:04:36.478 --rc geninfo_all_blocks=1 00:04:36.478 --rc geninfo_unexecuted_blocks=1 00:04:36.478 00:04:36.478 ' 00:04:36.478 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:36.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.478 --rc genhtml_branch_coverage=1 00:04:36.478 --rc genhtml_function_coverage=1 00:04:36.478 --rc genhtml_legend=1 00:04:36.478 --rc geninfo_all_blocks=1 00:04:36.478 --rc geninfo_unexecuted_blocks=1 00:04:36.478 00:04:36.478 ' 00:04:36.478 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:36.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.478 --rc genhtml_branch_coverage=1 00:04:36.478 --rc genhtml_function_coverage=1 00:04:36.478 --rc genhtml_legend=1 00:04:36.478 --rc geninfo_all_blocks=1 00:04:36.478 --rc geninfo_unexecuted_blocks=1 00:04:36.478 00:04:36.478 ' 00:04:36.478 04:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:36.478 04:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2759120 00:04:36.478 04:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2759120 00:04:36.478 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 2759120 ']' 00:04:36.478 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:36.478 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:36.478 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:36.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:36.478 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:36.478 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:36.478 04:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:36.739 [2024-11-05 04:15:50.158609] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
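What follows is the heart of the dpdk_mem_utility test: once spdk_tgt is listening on /var/tmp/spdk.sock, the harness calls the env_dpdk_get_mem_stats RPC (which makes the target write /tmp/spdk_mem_dump.txt) and then renders that dump with scripts/dpdk_mem_info.py, first as a summary and then per-heap with -m 0. The same inspection can be done by hand against any running target; a minimal sketch, paths relative to the spdk checkout this job uses:

    ./scripts/rpc.py env_dpdk_get_mem_stats   # target writes /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                # heap / mempool / memzone summary
    ./scripts/dpdk_mem_info.py -m 0           # element-level listing for heap id 0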
00:04:36.739 [2024-11-05 04:15:50.158694] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759120 ] 00:04:36.739 [2024-11-05 04:15:50.233474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.739 [2024-11-05 04:15:50.275647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.310 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:37.310 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:37.310 04:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:37.310 04:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:37.310 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:37.310 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.310 { 00:04:37.310 "filename": "/tmp/spdk_mem_dump.txt" 00:04:37.310 } 00:04:37.310 04:15:50 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:37.310 04:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:37.571 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:37.571 1 heaps totaling size 810.000000 MiB 00:04:37.571 size: 810.000000 MiB heap id: 0 00:04:37.571 end heaps---------- 00:04:37.571 9 mempools totaling size 595.772034 MiB 00:04:37.571 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:37.571 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:37.571 size: 92.545471 MiB name: bdev_io_2759120 00:04:37.571 size: 50.003479 MiB name: msgpool_2759120 00:04:37.571 size: 36.509338 MiB name: fsdev_io_2759120 00:04:37.571 size: 21.763794 MiB name: PDU_Pool 00:04:37.571 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:37.571 size: 4.133484 MiB name: evtpool_2759120 00:04:37.571 size: 0.026123 MiB name: Session_Pool 00:04:37.571 end mempools------- 00:04:37.571 6 memzones totaling size 4.142822 MiB 00:04:37.571 size: 1.000366 MiB name: RG_ring_0_2759120 00:04:37.571 size: 1.000366 MiB name: RG_ring_1_2759120 00:04:37.571 size: 1.000366 MiB name: RG_ring_4_2759120 00:04:37.571 size: 1.000366 MiB name: RG_ring_5_2759120 00:04:37.571 size: 0.125366 MiB name: RG_ring_2_2759120 00:04:37.571 size: 0.015991 MiB name: RG_ring_3_2759120 00:04:37.571 end memzones------- 00:04:37.571 04:15:50 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:37.571 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:37.571 list of free elements. 
size: 10.862488 MiB 00:04:37.571 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:37.571 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:37.571 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:37.571 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:37.571 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:37.571 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:37.571 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:37.571 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:37.571 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:37.571 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:37.571 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:37.571 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:37.571 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:37.571 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:37.571 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:37.571 list of standard malloc elements. size: 199.218628 MiB 00:04:37.571 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:37.571 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:37.571 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:37.571 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:37.571 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:37.571 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:37.571 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:37.572 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:37.572 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:37.572 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:37.572 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:37.572 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:37.572 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:37.572 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:37.572 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:37.572 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:37.572 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:37.572 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:37.572 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:37.572 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:37.572 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:37.572 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:37.572 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:37.572 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:37.572 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:37.572 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:37.572 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:37.572 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:37.572 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:37.572 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:37.572 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:37.572 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:37.572 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:04:37.572 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:37.572 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:37.572 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:37.572 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:37.572 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:37.572 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:37.572 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:37.572 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:37.572 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:37.572 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:37.572 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:37.572 list of memzone associated elements. size: 599.918884 MiB 00:04:37.572 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:37.572 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:37.572 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:37.572 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:37.572 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:37.572 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2759120_0 00:04:37.572 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:37.572 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2759120_0 00:04:37.572 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:37.572 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2759120_0 00:04:37.572 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:37.572 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:37.572 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:37.572 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:37.572 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:37.572 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2759120_0 00:04:37.572 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:37.572 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2759120 00:04:37.572 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:37.572 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2759120 00:04:37.572 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:37.572 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:37.572 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:37.572 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:37.572 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:37.572 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:37.572 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:37.572 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:37.572 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:37.572 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2759120 00:04:37.572 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:37.572 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2759120 00:04:37.572 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:37.572 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2759120 00:04:37.572 element at address: 
0x2000318fe940 with size: 1.000488 MiB 00:04:37.572 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2759120 00:04:37.572 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:37.572 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2759120 00:04:37.572 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:37.572 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2759120 00:04:37.572 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:37.572 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:37.572 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:37.572 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:37.572 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:37.572 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:37.572 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:37.572 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2759120 00:04:37.572 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:37.572 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2759120 00:04:37.572 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:37.572 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:37.572 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:37.572 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:37.572 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:37.572 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2759120 00:04:37.572 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:37.572 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:37.572 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:37.572 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2759120 00:04:37.572 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:37.572 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2759120 00:04:37.572 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:37.572 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2759120 00:04:37.572 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:37.572 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:37.572 04:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:37.572 04:15:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2759120 00:04:37.572 04:15:51 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 2759120 ']' 00:04:37.572 04:15:51 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 2759120 00:04:37.572 04:15:51 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:37.572 04:15:51 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:37.572 04:15:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2759120 00:04:37.572 04:15:51 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:37.572 04:15:51 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:37.572 04:15:51 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2759120' 00:04:37.572 killing process with pid 2759120 00:04:37.572 04:15:51 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 2759120 00:04:37.572 04:15:51 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 2759120 00:04:37.834 00:04:37.834 real 0m1.406s 00:04:37.834 user 0m1.469s 00:04:37.834 sys 0m0.405s 00:04:37.834 04:15:51 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:37.834 04:15:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:37.834 ************************************ 00:04:37.834 END TEST dpdk_mem_utility 00:04:37.834 ************************************ 00:04:37.834 04:15:51 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:37.834 04:15:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:37.834 04:15:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:37.834 04:15:51 -- common/autotest_common.sh@10 -- # set +x 00:04:37.834 ************************************ 00:04:37.834 START TEST event 00:04:37.834 ************************************ 00:04:37.834 04:15:51 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:37.834 * Looking for test storage... 00:04:38.096 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:38.096 04:15:51 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:38.096 04:15:51 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:38.096 04:15:51 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:38.096 04:15:51 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:38.096 04:15:51 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:38.096 04:15:51 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:38.096 04:15:51 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:38.096 04:15:51 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:38.096 04:15:51 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:38.096 04:15:51 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:38.096 04:15:51 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:38.096 04:15:51 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:38.096 04:15:51 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:38.096 04:15:51 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:38.096 04:15:51 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:38.096 04:15:51 event -- scripts/common.sh@344 -- # case "$op" in 00:04:38.096 04:15:51 event -- scripts/common.sh@345 -- # : 1 00:04:38.096 04:15:51 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:38.096 04:15:51 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:38.096 04:15:51 event -- scripts/common.sh@365 -- # decimal 1 00:04:38.096 04:15:51 event -- scripts/common.sh@353 -- # local d=1 00:04:38.096 04:15:51 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:38.096 04:15:51 event -- scripts/common.sh@355 -- # echo 1 00:04:38.096 04:15:51 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:38.096 04:15:51 event -- scripts/common.sh@366 -- # decimal 2 00:04:38.096 04:15:51 event -- scripts/common.sh@353 -- # local d=2 00:04:38.096 04:15:51 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:38.096 04:15:51 event -- scripts/common.sh@355 -- # echo 2 00:04:38.096 04:15:51 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:38.096 04:15:51 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:38.096 04:15:51 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:38.096 04:15:51 event -- scripts/common.sh@368 -- # return 0 00:04:38.096 04:15:51 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:38.096 04:15:51 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:38.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.096 --rc genhtml_branch_coverage=1 00:04:38.096 --rc genhtml_function_coverage=1 00:04:38.096 --rc genhtml_legend=1 00:04:38.096 --rc geninfo_all_blocks=1 00:04:38.096 --rc geninfo_unexecuted_blocks=1 00:04:38.096 00:04:38.096 ' 00:04:38.096 04:15:51 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:38.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.096 --rc genhtml_branch_coverage=1 00:04:38.096 --rc genhtml_function_coverage=1 00:04:38.096 --rc genhtml_legend=1 00:04:38.096 --rc geninfo_all_blocks=1 00:04:38.096 --rc geninfo_unexecuted_blocks=1 00:04:38.096 00:04:38.096 ' 00:04:38.096 04:15:51 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:38.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.096 --rc genhtml_branch_coverage=1 00:04:38.096 --rc genhtml_function_coverage=1 00:04:38.096 --rc genhtml_legend=1 00:04:38.096 --rc geninfo_all_blocks=1 00:04:38.096 --rc geninfo_unexecuted_blocks=1 00:04:38.096 00:04:38.096 ' 00:04:38.096 04:15:51 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:38.096 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:38.096 --rc genhtml_branch_coverage=1 00:04:38.096 --rc genhtml_function_coverage=1 00:04:38.096 --rc genhtml_legend=1 00:04:38.096 --rc geninfo_all_blocks=1 00:04:38.096 --rc geninfo_unexecuted_blocks=1 00:04:38.096 00:04:38.096 ' 00:04:38.096 04:15:51 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:38.096 04:15:51 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:38.096 04:15:51 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:38.096 04:15:51 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:38.096 04:15:51 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:38.096 04:15:51 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.096 ************************************ 00:04:38.096 START TEST event_perf 00:04:38.096 ************************************ 00:04:38.096 04:15:51 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:04:38.096 Running I/O for 1 seconds...[2024-11-05 04:15:51.648596] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:04:38.097 [2024-11-05 04:15:51.648693] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759498 ] 00:04:38.097 [2024-11-05 04:15:51.726076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:38.357 [2024-11-05 04:15:51.765872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:38.357 [2024-11-05 04:15:51.766037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:38.357 [2024-11-05 04:15:51.766191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:38.357 [2024-11-05 04:15:51.766192] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.300 Running I/O for 1 seconds... 00:04:39.300 lcore 0: 180870 00:04:39.300 lcore 1: 180870 00:04:39.300 lcore 2: 180868 00:04:39.300 lcore 3: 180871 00:04:39.300 done. 00:04:39.300 00:04:39.300 real 0m1.173s 00:04:39.300 user 0m4.095s 00:04:39.300 sys 0m0.078s 00:04:39.300 04:15:52 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:39.300 04:15:52 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:39.300 ************************************ 00:04:39.300 END TEST event_perf 00:04:39.300 ************************************ 00:04:39.300 04:15:52 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:39.300 04:15:52 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:39.300 04:15:52 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:39.300 04:15:52 event -- common/autotest_common.sh@10 -- # set +x 00:04:39.300 ************************************ 00:04:39.300 START TEST event_reactor 00:04:39.300 ************************************ 00:04:39.300 04:15:52 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:39.300 [2024-11-05 04:15:52.899312] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:04:39.300 [2024-11-05 04:15:52.899415] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759630 ] 00:04:39.560 [2024-11-05 04:15:52.973390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.560 [2024-11-05 04:15:53.008958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.501 test_start 00:04:40.501 oneshot 00:04:40.501 tick 100 00:04:40.501 tick 100 00:04:40.501 tick 250 00:04:40.501 tick 100 00:04:40.501 tick 100 00:04:40.501 tick 250 00:04:40.501 tick 100 00:04:40.501 tick 500 00:04:40.501 tick 100 00:04:40.501 tick 100 00:04:40.501 tick 250 00:04:40.501 tick 100 00:04:40.501 tick 100 00:04:40.501 test_end 00:04:40.501 00:04:40.501 real 0m1.163s 00:04:40.501 user 0m1.097s 00:04:40.501 sys 0m0.063s 00:04:40.501 04:15:54 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:40.501 04:15:54 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:40.501 ************************************ 00:04:40.501 END TEST event_reactor 00:04:40.501 ************************************ 00:04:40.501 04:15:54 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:40.501 04:15:54 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:04:40.501 04:15:54 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:40.501 04:15:54 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.501 ************************************ 00:04:40.501 START TEST event_reactor_perf 00:04:40.501 ************************************ 00:04:40.501 04:15:54 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:40.762 [2024-11-05 04:15:54.143002] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
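reactor_perf is the throughput counterpart: rather than tracing individual pollers, it pushes a stream of events through a single reactor for the requested window and reports events per second (366952 in this run). Run by hand it is just the binary and a duration, as sketched:

    # measure single-reactor event throughput over one second
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1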
00:04:40.762 [2024-11-05 04:15:54.143102] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759907 ] 00:04:40.762 [2024-11-05 04:15:54.219864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.762 [2024-11-05 04:15:54.255944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.703 test_start 00:04:41.703 test_end 00:04:41.703 Performance: 366952 events per second 00:04:41.703 00:04:41.703 real 0m1.168s 00:04:41.703 user 0m1.093s 00:04:41.703 sys 0m0.071s 00:04:41.703 04:15:55 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.703 04:15:55 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:41.703 ************************************ 00:04:41.703 END TEST event_reactor_perf 00:04:41.703 ************************************ 00:04:41.703 04:15:55 event -- event/event.sh@49 -- # uname -s 00:04:41.703 04:15:55 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:41.703 04:15:55 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:41.703 04:15:55 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.703 04:15:55 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.703 04:15:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.964 ************************************ 00:04:41.964 START TEST event_scheduler 00:04:41.964 ************************************ 00:04:41.964 04:15:55 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:41.964 * Looking for test storage... 
00:04:41.964 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:41.964 04:15:55 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:41.964 04:15:55 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:04:41.964 04:15:55 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:41.965 04:15:55 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.965 04:15:55 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:41.965 04:15:55 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.965 04:15:55 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:41.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.965 --rc genhtml_branch_coverage=1 00:04:41.965 --rc genhtml_function_coverage=1 00:04:41.965 --rc genhtml_legend=1 00:04:41.965 --rc geninfo_all_blocks=1 00:04:41.965 --rc geninfo_unexecuted_blocks=1 00:04:41.965 00:04:41.965 ' 00:04:41.965 04:15:55 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:41.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.965 --rc genhtml_branch_coverage=1 00:04:41.965 --rc genhtml_function_coverage=1 00:04:41.965 --rc genhtml_legend=1 00:04:41.965 --rc geninfo_all_blocks=1 00:04:41.965 --rc geninfo_unexecuted_blocks=1 00:04:41.965 00:04:41.965 ' 00:04:41.965 04:15:55 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:41.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.965 --rc genhtml_branch_coverage=1 00:04:41.965 --rc genhtml_function_coverage=1 00:04:41.965 --rc genhtml_legend=1 00:04:41.965 --rc geninfo_all_blocks=1 00:04:41.965 --rc geninfo_unexecuted_blocks=1 00:04:41.965 00:04:41.965 ' 00:04:41.965 04:15:55 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:41.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.965 --rc genhtml_branch_coverage=1 00:04:41.965 --rc genhtml_function_coverage=1 00:04:41.965 --rc genhtml_legend=1 00:04:41.965 --rc geninfo_all_blocks=1 00:04:41.965 --rc geninfo_unexecuted_blocks=1 00:04:41.965 00:04:41.965 ' 00:04:41.965 04:15:55 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:41.965 04:15:55 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2760295 00:04:41.965 04:15:55 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.965 04:15:55 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:41.965 04:15:55 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
2760295 00:04:41.965 04:15:55 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 2760295 ']' 00:04:41.965 04:15:55 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.965 04:15:55 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:41.965 04:15:55 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.965 04:15:55 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:41.965 04:15:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.225 [2024-11-05 04:15:55.619636] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:04:42.225 [2024-11-05 04:15:55.619686] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2760295 ] 00:04:42.225 [2024-11-05 04:15:55.678361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:42.225 [2024-11-05 04:15:55.709758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.225 [2024-11-05 04:15:55.709915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.225 [2024-11-05 04:15:55.710071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:42.225 [2024-11-05 04:15:55.710073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:42.225 04:15:55 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:42.225 04:15:55 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:04:42.226 04:15:55 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:42.226 04:15:55 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.226 04:15:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.226 [2024-11-05 04:15:55.766516] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:42.226 [2024-11-05 04:15:55.766531] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:42.226 [2024-11-05 04:15:55.766538] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:42.226 [2024-11-05 04:15:55.766543] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:42.226 [2024-11-05 04:15:55.766547] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:42.226 04:15:55 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.226 04:15:55 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:42.226 04:15:55 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.226 04:15:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.226 [2024-11-05 04:15:55.826927] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
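Two details of the startup above matter for reading the rest of this section. The scheduler app was launched with --wait-for-rpc precisely so the test could select the dynamic scheduler before framework initialization; the *ERROR* about SMT siblings only means the DPDK governor could not be initialized on this core mask, and the dynamic scheduler proceeds without it (load limit 20, core limit 80, core busy 95). The thread create/delete calls that follow go through an rpc.py test plugin (--plugin scheduler_plugin) rather than the standard RPC set. The standard-RPC part of the sequence, sketched:

    ./scripts/rpc.py framework_set_scheduler dynamic   # issued before init, as in this test
    ./scripts/rpc.py framework_start_init
    ./scripts/rpc.py framework_get_scheduler           # inspect the active scheduler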
00:04:42.226 04:15:55 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.226 04:15:55 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:42.226 04:15:55 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:42.226 04:15:55 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.226 04:15:55 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:42.486 ************************************ 00:04:42.486 START TEST scheduler_create_thread 00:04:42.486 ************************************ 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.486 2 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.486 3 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.486 4 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.486 5 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.486 6 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.486 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.486 7 00:04:42.487 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.487 04:15:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:42.487 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.487 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.487 8 00:04:42.487 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.487 04:15:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:42.487 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.487 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:42.487 9 00:04:42.487 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:42.487 04:15:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:42.487 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:42.487 04:15:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:43.057 10 00:04:43.057 04:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:43.057 04:15:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:43.057 04:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:43.057 04:15:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:44.440 04:15:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.440 04:15:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:44.440 04:15:57 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:44.440 04:15:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.440 04:15:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.010 04:15:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.010 04:15:58 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:45.010 04:15:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.010 04:15:58 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:45.951 04:15:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.951 04:15:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:45.951 04:15:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:45.951 04:15:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.951 04:15:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.521 04:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.521 00:04:46.521 real 0m4.226s 00:04:46.521 user 0m0.024s 00:04:46.521 sys 0m0.007s 00:04:46.521 04:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.521 04:16:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:46.521 ************************************ 00:04:46.521 END TEST scheduler_create_thread 00:04:46.521 ************************************ 00:04:46.521 04:16:00 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:46.521 04:16:00 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2760295 00:04:46.521 04:16:00 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 2760295 ']' 00:04:46.521 04:16:00 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 2760295 00:04:46.521 04:16:00 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:04:46.521 04:16:00 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:46.521 04:16:00 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2760295 00:04:46.781 04:16:00 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:04:46.781 04:16:00 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:04:46.781 04:16:00 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2760295' 00:04:46.781 killing process with pid 2760295 00:04:46.781 04:16:00 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 2760295 00:04:46.781 04:16:00 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 2760295 00:04:46.781 [2024-11-05 04:16:00.372200] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:47.042 00:04:47.042 real 0m5.159s 00:04:47.042 user 0m10.270s 00:04:47.042 sys 0m0.352s 00:04:47.042 04:16:00 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:47.042 04:16:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:47.042 ************************************ 00:04:47.042 END TEST event_scheduler 00:04:47.042 ************************************ 00:04:47.042 04:16:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:47.042 04:16:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:47.042 04:16:00 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:47.042 04:16:00 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:47.042 04:16:00 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.042 ************************************ 00:04:47.042 START TEST app_repeat 00:04:47.042 ************************************ 00:04:47.042 04:16:00 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:04:47.042 04:16:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.042 04:16:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.042 04:16:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:47.042 04:16:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.042 04:16:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:47.042 04:16:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:47.042 04:16:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:47.042 04:16:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2761359 00:04:47.042 04:16:00 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:47.042 04:16:00 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:47.042 04:16:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2761359' 00:04:47.042 Process app_repeat pid: 2761359 00:04:47.042 04:16:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:47.042 04:16:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:47.042 spdk_app_start Round 0 00:04:47.042 04:16:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2761359 /var/tmp/spdk-nbd.sock 00:04:47.042 04:16:00 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2761359 ']' 00:04:47.042 04:16:00 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:47.042 04:16:00 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:47.042 04:16:00 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:47.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:47.042 04:16:00 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:47.042 04:16:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.042 [2024-11-05 04:16:00.654628] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
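Editor's note: END TEST event_scheduler closes the scheduler suite and app_repeat begins. The harness starts the test/event/app_repeat binary on a two-core mask (-m 0x3) with its own NBD RPC socket and a 4-second repeat interval (-t 4), then runs the {0..2} round loop seen below, restarting the app between rounds with spdk_kill_instance SIGTERM. A condensed sketch of that driver, with paths shortened and error handling omitted; the helper names come from the trace, the loop body is a reconstruction:

    # Condensed round loop of the app_repeat driver (a sketch, paths shortened).
    rpc_server=/var/tmp/spdk-nbd.sock
    ./test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten "$repeat_pid" "$rpc_server"             # block until the RPC socket answers
        # ... create Malloc0/Malloc1, start, verify and stop the NBD disks ...
        ./scripts/rpc.py -s "$rpc_server" spdk_kill_instance SIGTERM  # app restarts its event loop
        sleep 3
    done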
00:04:47.042 [2024-11-05 04:16:00.654696] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2761359 ] 00:04:47.302 [2024-11-05 04:16:00.728735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:47.302 [2024-11-05 04:16:00.768674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:47.302 [2024-11-05 04:16:00.768678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.302 04:16:00 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:47.302 04:16:00 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:47.302 04:16:00 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.563 Malloc0 00:04:47.563 04:16:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:47.823 Malloc1 00:04:47.823 04:16:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.823 04:16:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.823 04:16:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.823 04:16:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:47.823 04:16:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:47.824 /dev/nbd0 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:47.824 04:16:01 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:47.824 04:16:01 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:47.824 04:16:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:47.824 04:16:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:47.824 04:16:01 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:04:47.824 04:16:01 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:47.824 04:16:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:47.824 04:16:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:47.824 04:16:01 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:47.824 1+0 records in 00:04:47.824 1+0 records out 00:04:47.824 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000275495 s, 14.9 MB/s 00:04:47.824 04:16:01 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.824 04:16:01 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:47.824 04:16:01 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:47.824 04:16:01 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:47.824 04:16:01 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:47.824 04:16:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:48.085 /dev/nbd1 00:04:48.085 04:16:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:48.085 04:16:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:48.085 04:16:01 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:48.085 04:16:01 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:48.085 04:16:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:48.085 04:16:01 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:48.085 04:16:01 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:48.085 04:16:01 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:48.085 04:16:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:48.085 04:16:01 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:48.085 04:16:01 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:48.085 1+0 records in 00:04:48.085 1+0 records out 00:04:48.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223379 s, 18.3 MB/s 00:04:48.085 04:16:01 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.085 04:16:01 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:48.085 04:16:01 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.085 04:16:01 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:48.085 04:16:01 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:48.085 04:16:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.085 04:16:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.085 
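Editor's note: both Malloc bdevs are now exported as /dev/nbd0 and /dev/nbd1, and each nbd_start_disk is followed by the waitfornbd helper traced above. It polls /proc/partitions with grep -q -w until the kernel registers the device, then issues a single 4 KiB O_DIRECT read through dd and stats the result to prove the device actually serves I/O; that is where the "1+0 records in / 4096 bytes copied" lines come from. A sketch reconstructed from the xtrace; the retry limit of 20 matches the "(( i <= 20 ))" guards, while the sleep interval and temp-file path are assumptions:

    # waitfornbd as reconstructed from the xtrace above (a sketch).
    waitfornbd() {
        local nbd_name=$1 tmp=/tmp/nbdtest size i
        for ((i = 1; i <= 20; i++)); do                       # wait for the kernel to register it
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            # One 4 KiB direct-I/O read proves the device really serves data.
            if dd if=/dev/"$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct 2> /dev/null; then
                size=$(stat -c %s "$tmp")
                rm -f "$tmp"
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }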
04:16:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.085 04:16:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.085 04:16:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:48.346 { 00:04:48.346 "nbd_device": "/dev/nbd0", 00:04:48.346 "bdev_name": "Malloc0" 00:04:48.346 }, 00:04:48.346 { 00:04:48.346 "nbd_device": "/dev/nbd1", 00:04:48.346 "bdev_name": "Malloc1" 00:04:48.346 } 00:04:48.346 ]' 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:48.346 { 00:04:48.346 "nbd_device": "/dev/nbd0", 00:04:48.346 "bdev_name": "Malloc0" 00:04:48.346 }, 00:04:48.346 { 00:04:48.346 "nbd_device": "/dev/nbd1", 00:04:48.346 "bdev_name": "Malloc1" 00:04:48.346 } 00:04:48.346 ]' 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:48.346 /dev/nbd1' 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:48.346 /dev/nbd1' 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:48.346 256+0 records in 00:04:48.346 256+0 records out 00:04:48.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012708 s, 82.5 MB/s 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:48.346 256+0 records in 00:04:48.346 256+0 records out 00:04:48.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162527 s, 64.5 MB/s 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:48.346 256+0 records in 00:04:48.346 256+0 records out 00:04:48.346 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230409 s, 45.5 MB/s 00:04:48.346 04:16:01 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.346 04:16:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:48.607 04:16:01 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:48.607 04:16:01 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:48.607 04:16:01 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:48.607 04:16:01 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:48.607 04:16:01 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.607 04:16:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.607 04:16:01 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:48.607 04:16:01 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:48.607 04:16:01 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.607 04:16:01 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:48.607 04:16:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:48.607 04:16:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:48.607 04:16:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:48.607 04:16:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.607 04:16:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:48.607 04:16:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:48.607 04:16:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.607 04:16:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.607 04:16:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:48.607 04:16:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:48.867 04:16:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:48.867 04:16:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:48.867 04:16:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:48.867 04:16:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:48.867 04:16:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:04:48.867 04:16:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:48.867 04:16:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:48.867 04:16:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:48.867 04:16:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:48.867 04:16:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.868 04:16:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.128 04:16:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:49.128 04:16:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:49.128 04:16:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.128 04:16:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:49.128 04:16:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:49.128 04:16:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.128 04:16:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:49.128 04:16:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:49.128 04:16:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:49.128 04:16:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:49.128 04:16:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:49.128 04:16:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:49.128 04:16:02 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:49.389 04:16:02 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:49.389 [2024-11-05 04:16:02.895701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:49.389 [2024-11-05 04:16:02.930507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.389 [2024-11-05 04:16:02.930508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.389 [2024-11-05 04:16:02.962093] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:49.389 [2024-11-05 04:16:02.962129] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:52.689 04:16:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:52.689 04:16:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:52.689 spdk_app_start Round 1 00:04:52.689 04:16:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2761359 /var/tmp/spdk-nbd.sock 00:04:52.689 04:16:05 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2761359 ']' 00:04:52.689 04:16:05 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:52.689 04:16:05 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.689 04:16:05 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:52.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
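Editor's note: Round 0 ends the way every round does. The write/verify pass visible in the dd and cmp lines fills a temp file with 1 MiB of /dev/urandom, writes it to both NBD devices with oflag=direct, byte-compares the first mebibyte of each device back against the file with cmp -b -n 1M, removes the file, detaches both devices, and only then lets spdk_kill_instance SIGTERM restart the app for Round 1. A sketch of the verify pass; the device list and dd/cmp parameters are taken from the log, the surrounding function shape is a reconstruction:

    # Write-then-verify pass over both NBD devices (a reconstruction).
    nbd_list=(/dev/nbd0 /dev/nbd1)
    tmp_file=/tmp/nbdrandtest

    # write: 256 x 4096 B = 1 MiB of random data, pushed to every device with O_DIRECT
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done

    # verify: byte-compare the first 1 MiB of each device against the source file
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"                       # non-zero exit on first mismatch
    done
    rm "$tmp_file"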
00:04:52.689 04:16:05 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.689 04:16:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:52.689 04:16:05 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:52.689 04:16:05 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:52.689 04:16:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.689 Malloc0 00:04:52.689 04:16:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.689 Malloc1 00:04:52.689 04:16:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.689 04:16:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.689 04:16:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.689 04:16:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:52.689 04:16:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.689 04:16:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:52.689 04:16:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.689 04:16:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.689 04:16:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.689 04:16:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:52.689 04:16:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.689 04:16:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:52.689 04:16:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:52.689 04:16:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:52.689 04:16:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.689 04:16:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:52.950 /dev/nbd0 00:04:52.950 04:16:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.950 04:16:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:52.950 04:16:06 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:52.950 04:16:06 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:52.950 04:16:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:52.950 04:16:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:52.950 04:16:06 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:52.950 04:16:06 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:52.950 04:16:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:52.950 04:16:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:52.950 04:16:06 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:52.950 1+0 records in 00:04:52.950 1+0 records out 00:04:52.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276408 s, 14.8 MB/s 00:04:52.950 04:16:06 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.950 04:16:06 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:52.950 04:16:06 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.950 04:16:06 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:52.950 04:16:06 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:52.950 04:16:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.950 04:16:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.950 04:16:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:53.211 /dev/nbd1 00:04:53.211 04:16:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:53.211 04:16:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:53.211 04:16:06 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:53.211 04:16:06 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:53.211 04:16:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:53.211 04:16:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:53.211 04:16:06 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:53.211 04:16:06 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:53.211 04:16:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:53.211 04:16:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:53.211 04:16:06 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:53.211 1+0 records in 00:04:53.211 1+0 records out 00:04:53.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175806 s, 23.3 MB/s 00:04:53.211 04:16:06 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.211 04:16:06 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:53.211 04:16:06 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:53.211 04:16:06 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:53.211 04:16:06 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:53.211 04:16:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:53.211 04:16:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:53.211 04:16:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.211 04:16:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.211 04:16:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:53.472 { 00:04:53.472 "nbd_device": "/dev/nbd0", 00:04:53.472 "bdev_name": "Malloc0" 00:04:53.472 }, 00:04:53.472 { 00:04:53.472 "nbd_device": "/dev/nbd1", 00:04:53.472 "bdev_name": "Malloc1" 00:04:53.472 } 00:04:53.472 ]' 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:53.472 { 00:04:53.472 "nbd_device": "/dev/nbd0", 00:04:53.472 "bdev_name": "Malloc0" 00:04:53.472 }, 00:04:53.472 { 00:04:53.472 "nbd_device": "/dev/nbd1", 00:04:53.472 "bdev_name": "Malloc1" 00:04:53.472 } 00:04:53.472 ]' 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:53.472 /dev/nbd1' 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:53.472 /dev/nbd1' 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:53.472 04:16:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:53.472 256+0 records in 00:04:53.472 256+0 records out 00:04:53.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117083 s, 89.6 MB/s 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.472 256+0 records in 00:04:53.472 256+0 records out 00:04:53.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163451 s, 64.2 MB/s 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.472 256+0 records in 00:04:53.472 256+0 records out 00:04:53.472 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0187786 s, 55.8 MB/s 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.472 04:16:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.732 04:16:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.732 04:16:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.732 04:16:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.732 04:16:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.732 04:16:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.732 04:16:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.732 04:16:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.732 04:16:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.732 04:16:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.732 04:16:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.993 04:16:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.993 04:16:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.993 04:16:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.993 04:16:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.993 04:16:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.993 04:16:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.993 04:16:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.993 04:16:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.993 04:16:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.993 04:16:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.993 04:16:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.993 04:16:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.993 04:16:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.993 04:16:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.253 04:16:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:54.253 04:16:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:54.253 04:16:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.253 04:16:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:54.253 04:16:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:54.253 04:16:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:54.253 04:16:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:54.253 04:16:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:54.253 04:16:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:54.253 04:16:07 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.253 04:16:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.514 [2024-11-05 04:16:07.957794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.514 [2024-11-05 04:16:07.992547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.514 [2024-11-05 04:16:07.992549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.514 [2024-11-05 04:16:08.024937] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.514 [2024-11-05 04:16:08.024970] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.890 04:16:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.890 04:16:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:57.890 spdk_app_start Round 2 00:04:57.890 04:16:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2761359 /var/tmp/spdk-nbd.sock 00:04:57.890 04:16:10 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2761359 ']' 00:04:57.890 04:16:10 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.890 04:16:10 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.890 04:16:10 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
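Editor's note: the nbd_get_count checks that bracket each round parse the JSON returned by the nbd_get_disks RPC. With both devices attached the reply holds two objects and grep -c /dev/nbd counts 2; after nbd_stop_disk the reply is an empty array, jq emits nothing, and the count must be 0 before the round may finish, which is exactly the "'[' 0 -ne 0 ']'" test above. A sketch of the counting logic; the socket path is from the log, the helper shape is an assumption:

    # nbd_get_count as reconstructed from the trace (helper shape is an assumption).
    nbd_get_count() {
        local rpc_server=$1 json names
        json=$(./scripts/rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')     # one device path per line
        echo "$names" | grep -c /dev/nbd || true              # prints 0 when the array was empty
    }

    count=$(nbd_get_count /var/tmp/spdk-nbd.sock)
    [ "$count" -ne 0 ] && echo "devices still attached" >&2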
00:04:57.890 04:16:10 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.890 04:16:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.890 04:16:11 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:57.890 04:16:11 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:04:57.890 04:16:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.890 Malloc0 00:04:57.890 04:16:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.890 Malloc1 00:04:57.890 04:16:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.890 04:16:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.890 04:16:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.890 04:16:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:57.890 04:16:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.890 04:16:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:57.891 04:16:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.891 04:16:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.891 04:16:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.891 04:16:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:57.891 04:16:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.891 04:16:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:57.891 04:16:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:57.891 04:16:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:57.891 04:16:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.891 04:16:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.151 /dev/nbd0 00:04:58.151 04:16:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.151 04:16:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.151 04:16:11 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:04:58.151 04:16:11 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:58.151 04:16:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:58.151 04:16:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:58.151 04:16:11 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:04:58.151 04:16:11 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:58.151 04:16:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:58.151 04:16:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:58.151 04:16:11 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:58.151 1+0 records in 00:04:58.151 1+0 records out 00:04:58.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000274298 s, 14.9 MB/s 00:04:58.151 04:16:11 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.151 04:16:11 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:58.151 04:16:11 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.151 04:16:11 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:58.151 04:16:11 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:58.151 04:16:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.151 04:16:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.151 04:16:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.411 /dev/nbd1 00:04:58.411 04:16:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.411 04:16:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.411 04:16:11 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:04:58.411 04:16:11 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:04:58.411 04:16:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:04:58.411 04:16:11 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:04:58.411 04:16:11 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:04:58.411 04:16:11 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:04:58.411 04:16:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:04:58.411 04:16:11 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:04:58.411 04:16:11 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.411 1+0 records in 00:04:58.411 1+0 records out 00:04:58.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260437 s, 15.7 MB/s 00:04:58.411 04:16:11 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.411 04:16:11 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:04:58.411 04:16:11 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.411 04:16:11 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:04:58.411 04:16:11 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:04:58.411 04:16:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.411 04:16:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.411 04:16:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.411 04:16:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.411 04:16:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.411 04:16:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:58.411 { 00:04:58.411 "nbd_device": "/dev/nbd0", 00:04:58.411 "bdev_name": "Malloc0" 00:04:58.411 }, 00:04:58.411 { 00:04:58.411 "nbd_device": "/dev/nbd1", 00:04:58.411 "bdev_name": "Malloc1" 00:04:58.411 } 00:04:58.411 ]' 00:04:58.411 04:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.411 { 00:04:58.411 "nbd_device": "/dev/nbd0", 00:04:58.411 "bdev_name": "Malloc0" 00:04:58.411 }, 00:04:58.411 { 00:04:58.411 "nbd_device": "/dev/nbd1", 00:04:58.411 "bdev_name": "Malloc1" 00:04:58.411 } 00:04:58.411 ]' 00:04:58.411 04:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.672 /dev/nbd1' 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.672 /dev/nbd1' 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.672 256+0 records in 00:04:58.672 256+0 records out 00:04:58.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0127072 s, 82.5 MB/s 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.672 256+0 records in 00:04:58.672 256+0 records out 00:04:58.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0176829 s, 59.3 MB/s 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:58.672 256+0 records in 00:04:58.672 256+0 records out 00:04:58.672 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175038 s, 59.9 MB/s 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:58.672 04:16:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:58.673 04:16:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.673 04:16:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.933 04:16:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.193 04:16:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.193 04:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.193 04:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.193 04:16:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.193 04:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.193 04:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.193 04:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:59.193 04:16:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.193 04:16:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.193 04:16:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.193 04:16:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.193 04:16:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.193 04:16:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.453 04:16:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:59.453 [2024-11-05 04:16:13.065022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:59.714 [2024-11-05 04:16:13.099339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.714 [2024-11-05 04:16:13.099341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.714 [2024-11-05 04:16:13.131031] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:59.714 [2024-11-05 04:16:13.131071] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:03.015 04:16:15 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2761359 /var/tmp/spdk-nbd.sock 00:05:03.015 04:16:15 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 2761359 ']' 00:05:03.015 04:16:15 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:03.015 04:16:15 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.015 04:16:15 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:03.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
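Editor's note: Round 2 tears down the same way, and the final block below ends with killprocess 2761359, the same guard already used for the scheduler app. It refuses an empty pid, checks with kill -0 that the process is still alive, verifies through ps that the command name is an SPDK reactor (reactor_0 here) rather than sudo, then kills the pid and waits on it so the exit status is reaped. A sketch reconstructed from the autotest_common.sh steps in the trace:

    # killprocess as reconstructed from autotest_common.sh's trace (a sketch).
    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2> /dev/null || return 0               # nothing left to kill
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        fi
        [ "$process_name" = sudo ] && return 1                # never target the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                           # reap it, propagate exit status
    }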
00:05:03.015 04:16:15 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.015 04:16:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.015 04:16:16 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:03.015 04:16:16 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:03.015 04:16:16 event.app_repeat -- event/event.sh@39 -- # killprocess 2761359 00:05:03.015 04:16:16 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 2761359 ']' 00:05:03.015 04:16:16 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 2761359 00:05:03.015 04:16:16 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:03.015 04:16:16 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:03.015 04:16:16 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2761359 00:05:03.015 04:16:16 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:03.015 04:16:16 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:03.015 04:16:16 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2761359' 00:05:03.015 killing process with pid 2761359 00:05:03.015 04:16:16 event.app_repeat -- common/autotest_common.sh@971 -- # kill 2761359 00:05:03.015 04:16:16 event.app_repeat -- common/autotest_common.sh@976 -- # wait 2761359 00:05:03.015 spdk_app_start is called in Round 0. 00:05:03.015 Shutdown signal received, stop current app iteration 00:05:03.015 Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 reinitialization... 00:05:03.015 spdk_app_start is called in Round 1. 00:05:03.015 Shutdown signal received, stop current app iteration 00:05:03.015 Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 reinitialization... 00:05:03.015 spdk_app_start is called in Round 2. 00:05:03.015 Shutdown signal received, stop current app iteration 00:05:03.015 Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 reinitialization... 00:05:03.015 spdk_app_start is called in Round 3. 
00:05:03.015 Shutdown signal received, stop current app iteration 00:05:03.015 04:16:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:03.015 04:16:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:03.015 00:05:03.015 real 0m15.688s 00:05:03.015 user 0m34.266s 00:05:03.015 sys 0m2.227s 00:05:03.015 04:16:16 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.015 04:16:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.015 ************************************ 00:05:03.015 END TEST app_repeat 00:05:03.015 ************************************ 00:05:03.015 04:16:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:03.015 04:16:16 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:03.015 04:16:16 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.015 04:16:16 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.015 04:16:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.015 ************************************ 00:05:03.015 START TEST cpu_locks 00:05:03.015 ************************************ 00:05:03.015 04:16:16 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:03.015 * Looking for test storage... 00:05:03.015 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:03.016 04:16:16 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:03.016 04:16:16 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:03.016 04:16:16 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:03.016 04:16:16 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.016 04:16:16 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:03.016 04:16:16 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.016 04:16:16 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:03.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.016 --rc genhtml_branch_coverage=1 00:05:03.016 --rc genhtml_function_coverage=1 00:05:03.016 --rc genhtml_legend=1 00:05:03.016 --rc geninfo_all_blocks=1 00:05:03.016 --rc geninfo_unexecuted_blocks=1 00:05:03.016 00:05:03.016 ' 00:05:03.016 04:16:16 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:03.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.016 --rc genhtml_branch_coverage=1 00:05:03.016 --rc genhtml_function_coverage=1 00:05:03.016 --rc genhtml_legend=1 00:05:03.016 --rc geninfo_all_blocks=1 00:05:03.016 --rc geninfo_unexecuted_blocks=1 00:05:03.016 00:05:03.016 ' 00:05:03.016 04:16:16 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:03.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.016 --rc genhtml_branch_coverage=1 00:05:03.016 --rc genhtml_function_coverage=1 00:05:03.016 --rc genhtml_legend=1 00:05:03.016 --rc geninfo_all_blocks=1 00:05:03.016 --rc geninfo_unexecuted_blocks=1 00:05:03.016 00:05:03.016 ' 00:05:03.016 04:16:16 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.016 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.016 --rc genhtml_branch_coverage=1 00:05:03.016 --rc genhtml_function_coverage=1 00:05:03.016 --rc genhtml_legend=1 00:05:03.016 --rc geninfo_all_blocks=1 00:05:03.016 --rc geninfo_unexecuted_blocks=1 00:05:03.016 00:05:03.016 ' 00:05:03.016 04:16:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:03.016 04:16:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:03.016 04:16:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:03.016 04:16:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:03.016 04:16:16 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.016 04:16:16 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.016 04:16:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.016 ************************************ 
00:05:03.016 START TEST default_locks 00:05:03.016 ************************************ 00:05:03.016 04:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:03.016 04:16:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2764749 00:05:03.016 04:16:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2764749 00:05:03.016 04:16:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.016 04:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2764749 ']' 00:05:03.016 04:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.016 04:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.016 04:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.016 04:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.016 04:16:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.277 [2024-11-05 04:16:16.700865] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:05:03.277 [2024-11-05 04:16:16.700934] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2764749 ] 00:05:03.277 [2024-11-05 04:16:16.778752] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.277 [2024-11-05 04:16:16.821498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.220 04:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:04.220 04:16:17 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:04.220 04:16:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2764749 00:05:04.220 04:16:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2764749 00:05:04.220 04:16:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:04.480 lslocks: write error 00:05:04.480 04:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2764749 00:05:04.480 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 2764749 ']' 00:05:04.480 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 2764749 00:05:04.480 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:04.480 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:04.480 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2764749 00:05:04.480 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:04.480 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:04.480 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 2764749' 00:05:04.480 killing process with pid 2764749 00:05:04.480 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 2764749 00:05:04.480 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 2764749 00:05:04.741 04:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2764749 00:05:04.741 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2764749 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2764749 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 2764749 ']' 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:04.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
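The locks_exist check traced in this test reduces to asking lslocks whether the target pid holds a file lock on one of SPDK's per-core lock files. A standalone version, assuming $pid is a live spdk_tgt:

    pid=2764749                                            # pid of a running spdk_tgt (assumed)
    if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
        echo "pid $pid holds a /var/tmp/spdk_cpu_lock_* core lock"
    else
        echo "pid $pid holds no core locks"
    fi

The stray "lslocks: write error" lines in the log are expected noise, not failures: grep -q exits on its first match and closes the pipe while lslocks is still writing.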
00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2764749) - No such process 00:05:04.742 ERROR: process (pid: 2764749) is no longer running 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:04.742 00:05:04.742 real 0m1.682s 00:05:04.742 user 0m1.809s 00:05:04.742 sys 0m0.580s 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:04.742 04:16:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:04.742 ************************************ 00:05:04.742 END TEST default_locks 00:05:04.742 ************************************ 00:05:04.742 04:16:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:04.742 04:16:18 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:04.742 04:16:18 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:04.742 04:16:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:05.002 ************************************ 00:05:05.002 START TEST default_locks_via_rpc 00:05:05.002 ************************************ 00:05:05.002 04:16:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:05.002 04:16:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2765124 00:05:05.002 04:16:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2765124 00:05:05.002 04:16:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.002 04:16:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2765124 ']' 00:05:05.002 04:16:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.002 04:16:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:05.002 04:16:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:05.002 04:16:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:05.002 04:16:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.002 [2024-11-05 04:16:18.444754] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:05:05.002 [2024-11-05 04:16:18.444805] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2765124 ] 00:05:05.002 [2024-11-05 04:16:18.517510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.002 [2024-11-05 04:16:18.553816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2765124 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2765124 00:05:05.946 04:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:06.214 04:16:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2765124 00:05:06.214 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 2765124 ']' 00:05:06.214 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 2765124 00:05:06.214 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:06.214 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:06.214 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2765124 00:05:06.214 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:06.214 
04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:06.214 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2765124' 00:05:06.214 killing process with pid 2765124 00:05:06.214 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 2765124 00:05:06.214 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 2765124 00:05:06.476 00:05:06.476 real 0m1.556s 00:05:06.476 user 0m1.711s 00:05:06.476 sys 0m0.503s 00:05:06.476 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.476 04:16:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.476 ************************************ 00:05:06.476 END TEST default_locks_via_rpc 00:05:06.476 ************************************ 00:05:06.476 04:16:19 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:06.476 04:16:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.476 04:16:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.476 04:16:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.476 ************************************ 00:05:06.476 START TEST non_locking_app_on_locked_coremask 00:05:06.476 ************************************ 00:05:06.476 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:06.476 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2765460 00:05:06.476 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2765460 /var/tmp/spdk.sock 00:05:06.476 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:06.476 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2765460 ']' 00:05:06.476 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.476 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:06.476 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.476 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:06.476 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.476 [2024-11-05 04:16:20.078439] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
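default_locks_via_rpc, which just finished above, exercises the runtime toggle for the same lock mechanism: the core lock is released and re-acquired over the RPC socket instead of only at process start. A sketch under the same conventions, assuming a spdk_tgt started with -m 0x1 on the default socket (the test's no_locks helper expects the lock-file glob to be empty while the locks are disabled):

    sock=/var/tmp/spdk.sock
    rpc=./scripts/rpc.py                                  # path inside an SPDK checkout (assumed)
    "$rpc" -s "$sock" framework_disable_cpumask_locks     # release the per-core lock files
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null \
        || echo "no lock files held"                      # matches the no_locks glob check above
    "$rpc" -s "$sock" framework_enable_cpumask_locks      # take the per-core locks again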
00:05:06.476 [2024-11-05 04:16:20.078494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2765460 ] 00:05:06.738 [2024-11-05 04:16:20.149120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.738 [2024-11-05 04:16:20.185428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.310 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:07.310 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:07.310 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2765691 00:05:07.310 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2765691 /var/tmp/spdk2.sock 00:05:07.310 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:07.310 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2765691 ']' 00:05:07.310 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.310 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:07.310 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.310 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:07.310 04:16:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.310 [2024-11-05 04:16:20.912199] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:05:07.310 [2024-11-05 04:16:20.912255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2765691 ] 00:05:07.571 [2024-11-05 04:16:21.021793] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
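non_locking_app_on_locked_coremask pairs a normally-locked primary with a second target that opts out of locking entirely, which is why the second launch above prints "CPU core locks deactivated" and comes up on the already-claimed core 0. Roughly, with the wait-for-socket steps elided:

    ./build/bin/spdk_tgt -m 0x1 &                         # locks core 0 (/var/tmp/spdk_cpu_lock_000)
    pid1=$!
    # (the real test waits for /var/tmp/spdk.sock before continuing)
    ./build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    lslocks | grep spdk_cpu_lock                          # only $pid1 should appear

Giving the second instance its own RPC socket via -r is what lets the harness drive and tear down each target independently.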
00:05:07.571 [2024-11-05 04:16:21.021821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.571 [2024-11-05 04:16:21.094230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.142 04:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:08.142 04:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:08.142 04:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2765460 00:05:08.142 04:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:08.142 04:16:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2765460 00:05:08.713 lslocks: write error 00:05:08.713 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2765460 00:05:08.713 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2765460 ']' 00:05:08.713 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2765460 00:05:08.713 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:08.713 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:08.713 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2765460 00:05:08.713 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:08.713 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:08.713 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2765460' 00:05:08.713 killing process with pid 2765460 00:05:08.713 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2765460 00:05:08.713 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2765460 00:05:08.974 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2765691 00:05:08.974 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2765691 ']' 00:05:08.974 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2765691 00:05:08.974 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:08.974 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:08.974 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2765691 00:05:09.234 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:09.234 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:09.234 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2765691' 00:05:09.235 
killing process with pid 2765691 00:05:09.235 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2765691 00:05:09.235 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2765691 00:05:09.235 00:05:09.235 real 0m2.816s 00:05:09.235 user 0m3.131s 00:05:09.235 sys 0m0.826s 00:05:09.235 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:09.235 04:16:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.235 ************************************ 00:05:09.235 END TEST non_locking_app_on_locked_coremask 00:05:09.235 ************************************ 00:05:09.235 04:16:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:09.235 04:16:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:09.235 04:16:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:09.235 04:16:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.495 ************************************ 00:05:09.495 START TEST locking_app_on_unlocked_coremask 00:05:09.495 ************************************ 00:05:09.495 04:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:05:09.495 04:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2766068 00:05:09.495 04:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2766068 /var/tmp/spdk.sock 00:05:09.495 04:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:09.495 04:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2766068 ']' 00:05:09.495 04:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.495 04:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:09.495 04:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.495 04:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:09.495 04:16:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:09.495 [2024-11-05 04:16:22.967594] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:05:09.496 [2024-11-05 04:16:22.967649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2766068 ] 00:05:09.496 [2024-11-05 04:16:23.043173] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
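The killprocess helper that runs after every test, traced twice just above, follows one pattern: confirm the pid is alive, confirm it is the reactor process rather than something like a sudo wrapper, then signal and reap it. A condensed sketch of that logic:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                        # is it still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")           # e.g. reactor_0
        [ "$name" = sudo ] && return 1                    # never signal a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                        # wait works because tgt is our child
    }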
00:05:09.496 [2024-11-05 04:16:23.043206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.496 [2024-11-05 04:16:23.082819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.438 04:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:10.438 04:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:10.438 04:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2766370 00:05:10.438 04:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2766370 /var/tmp/spdk2.sock 00:05:10.438 04:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2766370 ']' 00:05:10.438 04:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:10.438 04:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.438 04:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:10.438 04:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.438 04:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:10.438 04:16:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.438 [2024-11-05 04:16:23.825136] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:05:10.438 [2024-11-05 04:16:23.825193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2766370 ] 00:05:10.438 [2024-11-05 04:16:23.937149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.438 [2024-11-05 04:16:24.009350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.009 04:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:11.009 04:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:11.009 04:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2766370 00:05:11.009 04:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:11.009 04:16:24 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2766370 00:05:11.581 lslocks: write error 00:05:11.581 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2766068 00:05:11.581 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2766068 ']' 00:05:11.581 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2766068 00:05:11.581 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:11.581 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:11.581 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2766068 00:05:11.581 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:11.581 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:11.581 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2766068' 00:05:11.581 killing process with pid 2766068 00:05:11.581 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2766068 00:05:11.581 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2766068 00:05:11.841 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2766370 00:05:11.841 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2766370 ']' 00:05:11.841 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 2766370 00:05:11.841 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:12.102 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:12.102 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2766370 00:05:12.102 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:12.102 04:16:25 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:12.102 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2766370' 00:05:12.102 killing process with pid 2766370 00:05:12.102 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 2766370 00:05:12.102 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 2766370 00:05:12.363 00:05:12.363 real 0m2.837s 00:05:12.363 user 0m3.151s 00:05:12.363 sys 0m0.864s 00:05:12.363 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:12.363 04:16:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.363 ************************************ 00:05:12.363 END TEST locking_app_on_unlocked_coremask 00:05:12.363 ************************************ 00:05:12.363 04:16:25 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:12.363 04:16:25 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:12.363 04:16:25 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:12.363 04:16:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.363 ************************************ 00:05:12.363 START TEST locking_app_on_locked_coremask 00:05:12.363 ************************************ 00:05:12.363 04:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:05:12.363 04:16:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2766774 00:05:12.363 04:16:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2766774 /var/tmp/spdk.sock 00:05:12.363 04:16:25 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:12.363 04:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2766774 ']' 00:05:12.363 04:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.363 04:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:12.363 04:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.363 04:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:12.363 04:16:25 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.363 [2024-11-05 04:16:25.883006] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:05:12.363 [2024-11-05 04:16:25.883059] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2766774 ] 00:05:12.363 [2024-11-05 04:16:25.953329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.363 [2024-11-05 04:16:25.990016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2766826 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2766826 /var/tmp/spdk2.sock 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2766826 /var/tmp/spdk2.sock 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2766826 /var/tmp/spdk2.sock 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 2766826 ']' 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:13.306 04:16:26 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.306 [2024-11-05 04:16:26.719222] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:05:13.306 [2024-11-05 04:16:26.719277] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2766826 ] 00:05:13.306 [2024-11-05 04:16:26.831015] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2766774 has claimed it. 00:05:13.306 [2024-11-05 04:16:26.831056] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:13.878 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2766826) - No such process 00:05:13.878 ERROR: process (pid: 2766826) is no longer running 00:05:13.878 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:13.878 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:13.878 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:13.878 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:13.878 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:13.878 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:13.878 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2766774 00:05:13.878 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2766774 00:05:13.878 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:14.139 lslocks: write error 00:05:14.139 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2766774 00:05:14.139 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 2766774 ']' 00:05:14.139 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 2766774 00:05:14.139 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:14.139 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:14.139 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2766774 00:05:14.400 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:14.400 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:14.400 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2766774' 00:05:14.400 killing process with pid 2766774 00:05:14.400 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 2766774 00:05:14.400 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 2766774 00:05:14.400 00:05:14.400 real 0m2.174s 00:05:14.400 user 0m2.467s 00:05:14.400 sys 0m0.587s 00:05:14.400 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
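locking_app_on_locked_coremask is the negative case: with the first target holding /var/tmp/spdk_cpu_lock_000, a second plain -m 0x1 launch must abort, which is exactly the "Cannot create lock on core 0" / "Unable to acquire lock" pair above, and the NOT wrapper turns that nonzero exit into es=1. Roughly:

    ./build/bin/spdk_tgt -m 0x1 &                         # first instance claims core 0
    # (wait for /var/tmp/spdk.sock here in a real run)
    if ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock; then
        echo "BUG: second instance started on a claimed core" >&2
    else
        echo "refused core 0 as expected"                 # this exit code is what NOT asserts
    fi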
00:05:14.400 04:16:27 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.400 ************************************ 00:05:14.400 END TEST locking_app_on_locked_coremask 00:05:14.400 ************************************ 00:05:14.660 04:16:28 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:14.660 04:16:28 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.660 04:16:28 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.660 04:16:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.660 ************************************ 00:05:14.660 START TEST locking_overlapped_coremask 00:05:14.660 ************************************ 00:05:14.660 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:14.660 04:16:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2767153 00:05:14.660 04:16:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2767153 /var/tmp/spdk.sock 00:05:14.660 04:16:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:14.660 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2767153 ']' 00:05:14.660 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.660 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:14.661 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.661 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:14.661 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.661 [2024-11-05 04:16:28.133716] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:05:14.661 [2024-11-05 04:16:28.133777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2767153 ] 00:05:14.661 [2024-11-05 04:16:28.205211] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:14.661 [2024-11-05 04:16:28.245462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.661 [2024-11-05 04:16:28.245577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.661 [2024-11-05 04:16:28.245580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2767487 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2767487 /var/tmp/spdk2.sock 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2767487 /var/tmp/spdk2.sock 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2767487 /var/tmp/spdk2.sock 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 2767487 ']' 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:15.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:15.604 04:16:28 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:15.604 [2024-11-05 04:16:28.973725] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:05:15.604 [2024-11-05 04:16:28.973785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2767487 ] 00:05:15.604 [2024-11-05 04:16:29.061908] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2767153 has claimed it. 00:05:15.604 [2024-11-05 04:16:29.061943] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:16.175 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (2767487) - No such process 00:05:16.175 ERROR: process (pid: 2767487) is no longer running 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2767153 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 2767153 ']' 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 2767153 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2767153 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2767153' 00:05:16.175 killing process with pid 2767153 00:05:16.175 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 2767153 00:05:16.175 04:16:29 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 2767153 00:05:16.436 00:05:16.436 real 0m1.786s 00:05:16.436 user 0m5.181s 00:05:16.436 sys 0m0.362s 00:05:16.436 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:16.436 04:16:29 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:16.436 ************************************ 00:05:16.436 END TEST locking_overlapped_coremask 00:05:16.436 ************************************ 00:05:16.436 04:16:29 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:16.436 04:16:29 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:16.436 04:16:29 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:16.436 04:16:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.436 ************************************ 00:05:16.436 START TEST locking_overlapped_coremask_via_rpc 00:05:16.436 ************************************ 00:05:16.436 04:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:16.436 04:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2767540 00:05:16.436 04:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2767540 /var/tmp/spdk.sock 00:05:16.436 04:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:16.436 04:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2767540 ']' 00:05:16.436 04:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.436 04:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:16.436 04:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.436 04:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:16.436 04:16:29 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.436 [2024-11-05 04:16:29.996925] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:05:16.436 [2024-11-05 04:16:29.996978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2767540 ] 00:05:16.436 [2024-11-05 04:16:30.072361] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
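The check_remaining_locks helper traced at the end of the previous test (and again further down) reduces to comparing a glob of the lock files actually present against a brace expansion of the expected set; with core mask 0x7 the target must hold exactly locks 000 through 002. Restated from the trace:

locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files actually on disk
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2, i.e. mask 0x7
[[ ${locks[*]} == "${locks_expected[*]}" ]]         # any mismatch fails the test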
00:05:16.436 [2024-11-05 04:16:30.072398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:16.697 [2024-11-05 04:16:30.113669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.697 [2024-11-05 04:16:30.113765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:16.697 [2024-11-05 04:16:30.113769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.269 04:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:17.269 04:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:17.269 04:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2767861 00:05:17.269 04:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2767861 /var/tmp/spdk2.sock 00:05:17.269 04:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2767861 ']' 00:05:17.269 04:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:17.269 04:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:17.269 04:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:17.269 04:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:17.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:17.269 04:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:17.269 04:16:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.269 [2024-11-05 04:16:30.851320] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:05:17.269 [2024-11-05 04:16:30.851374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2767861 ] 00:05:17.530 [2024-11-05 04:16:30.940257] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
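Both spdk_tgt command lines appear in the trace above: masks 0x7 (cores 0-2) and 0x1c (cores 2-4) deliberately overlap on core 2, which is only possible because --disable-cpumask-locks stops each target from creating /var/tmp/spdk_cpu_lock_* files at startup. In outline, with the long workspace paths shortened:

spdk_tgt -m 0x7 --disable-cpumask-locks &                          # cores 0-2, no lock files taken
spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &  # cores 2-4, overlap tolerated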
00:05:17.530 [2024-11-05 04:16:30.940282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:17.530 [2024-11-05 04:16:31.003552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.530 [2024-11-05 04:16:31.006867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:17.530 [2024-11-05 04:16:31.006870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.101 [2024-11-05 04:16:31.651812] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2767540 has claimed it. 
00:05:18.101 request: 00:05:18.101 { 00:05:18.101 "method": "framework_enable_cpumask_locks", 00:05:18.101 "req_id": 1 00:05:18.101 } 00:05:18.101 Got JSON-RPC error response 00:05:18.101 response: 00:05:18.101 { 00:05:18.101 "code": -32603, 00:05:18.101 "message": "Failed to claim CPU core: 2" 00:05:18.101 } 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:18.101 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:18.102 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:18.102 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2767540 /var/tmp/spdk.sock 00:05:18.102 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2767540 ']' 00:05:18.102 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.102 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:18.102 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.102 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:18.102 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.362 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.362 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:18.362 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2767861 /var/tmp/spdk2.sock 00:05:18.362 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 2767861 ']' 00:05:18.362 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:18.362 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:18.362 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:18.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
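This exchange is the heart of the via_rpc variant: because both targets started without locks, the first target can claim its cores after the fact over JSON-RPC, and the second target's overlapping claim must then fail with the -32603 error echoed above. Assuming a standalone scripts/rpc.py invocation is equivalent to the rpc_cmd wrapper used in the trace:

./scripts/rpc.py framework_enable_cpumask_locks                         # first target claims cores 0-2
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # must fail: core 2 already claimed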
00:05:18.362 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:18.362 04:16:31 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.624 04:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:18.624 04:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:18.624 04:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:18.624 04:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:18.624 04:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:18.624 04:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:18.624 00:05:18.624 real 0m2.097s 00:05:18.624 user 0m0.874s 00:05:18.624 sys 0m0.143s 00:05:18.624 04:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.624 04:16:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.624 ************************************ 00:05:18.624 END TEST locking_overlapped_coremask_via_rpc 00:05:18.624 ************************************ 00:05:18.624 04:16:32 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:18.624 04:16:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2767540 ]] 00:05:18.624 04:16:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2767540 00:05:18.624 04:16:32 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2767540 ']' 00:05:18.624 04:16:32 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2767540 00:05:18.624 04:16:32 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:18.624 04:16:32 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:18.624 04:16:32 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2767540 00:05:18.624 04:16:32 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:18.624 04:16:32 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:18.624 04:16:32 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2767540' 00:05:18.624 killing process with pid 2767540 00:05:18.624 04:16:32 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2767540 00:05:18.624 04:16:32 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2767540 00:05:18.885 04:16:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2767861 ]] 00:05:18.885 04:16:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2767861 00:05:18.885 04:16:32 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2767861 ']' 00:05:18.885 04:16:32 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2767861 00:05:18.885 04:16:32 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:18.885 04:16:32 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:05:18.885 04:16:32 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2767861 00:05:18.885 04:16:32 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:18.885 04:16:32 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:18.885 04:16:32 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2767861' 00:05:18.885 killing process with pid 2767861 00:05:18.885 04:16:32 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 2767861 00:05:18.885 04:16:32 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 2767861 00:05:19.146 04:16:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:19.146 04:16:32 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:19.146 04:16:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2767540 ]] 00:05:19.146 04:16:32 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2767540 00:05:19.146 04:16:32 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2767540 ']' 00:05:19.146 04:16:32 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2767540 00:05:19.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2767540) - No such process 00:05:19.146 04:16:32 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2767540 is not found' 00:05:19.146 Process with pid 2767540 is not found 00:05:19.146 04:16:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2767861 ]] 00:05:19.146 04:16:32 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2767861 00:05:19.146 04:16:32 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 2767861 ']' 00:05:19.146 04:16:32 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 2767861 00:05:19.146 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2767861) - No such process 00:05:19.146 04:16:32 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 2767861 is not found' 00:05:19.146 Process with pid 2767861 is not found 00:05:19.146 04:16:32 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:19.146 00:05:19.146 real 0m16.228s 00:05:19.146 user 0m28.559s 00:05:19.146 sys 0m4.801s 00:05:19.146 04:16:32 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:19.146 04:16:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.146 ************************************ 00:05:19.146 END TEST cpu_locks 00:05:19.146 ************************************ 00:05:19.146 00:05:19.146 real 0m41.274s 00:05:19.146 user 1m19.670s 00:05:19.146 sys 0m8.029s 00:05:19.146 04:16:32 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:19.146 04:16:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.146 ************************************ 00:05:19.146 END TEST event 00:05:19.146 ************************************ 00:05:19.146 04:16:32 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:19.146 04:16:32 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:19.146 04:16:32 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.146 04:16:32 -- common/autotest_common.sh@10 -- # set +x 00:05:19.146 ************************************ 00:05:19.146 START TEST thread 00:05:19.146 ************************************ 00:05:19.146 04:16:32 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:19.408 * Looking for test storage... 00:05:19.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:19.408 04:16:32 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:19.408 04:16:32 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:19.408 04:16:32 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:19.408 04:16:32 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:19.408 04:16:32 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:19.408 04:16:32 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:19.408 04:16:32 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:19.408 04:16:32 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:19.408 04:16:32 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:19.408 04:16:32 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:19.408 04:16:32 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:19.408 04:16:32 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:19.408 04:16:32 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:19.408 04:16:32 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:19.408 04:16:32 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:19.408 04:16:32 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:19.408 04:16:32 thread -- scripts/common.sh@345 -- # : 1 00:05:19.408 04:16:32 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:19.408 04:16:32 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:19.408 04:16:32 thread -- scripts/common.sh@365 -- # decimal 1 00:05:19.408 04:16:32 thread -- scripts/common.sh@353 -- # local d=1 00:05:19.408 04:16:32 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:19.408 04:16:32 thread -- scripts/common.sh@355 -- # echo 1 00:05:19.408 04:16:32 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:19.408 04:16:32 thread -- scripts/common.sh@366 -- # decimal 2 00:05:19.408 04:16:32 thread -- scripts/common.sh@353 -- # local d=2 00:05:19.408 04:16:32 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:19.408 04:16:32 thread -- scripts/common.sh@355 -- # echo 2 00:05:19.408 04:16:32 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:19.408 04:16:32 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:19.408 04:16:32 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:19.408 04:16:32 thread -- scripts/common.sh@368 -- # return 0 00:05:19.408 04:16:32 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:19.408 04:16:32 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:19.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.408 --rc genhtml_branch_coverage=1 00:05:19.408 --rc genhtml_function_coverage=1 00:05:19.408 --rc genhtml_legend=1 00:05:19.408 --rc geninfo_all_blocks=1 00:05:19.408 --rc geninfo_unexecuted_blocks=1 00:05:19.408 00:05:19.408 ' 00:05:19.408 04:16:32 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:19.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.408 --rc genhtml_branch_coverage=1 00:05:19.408 --rc genhtml_function_coverage=1 00:05:19.408 --rc genhtml_legend=1 00:05:19.408 --rc geninfo_all_blocks=1 00:05:19.408 --rc geninfo_unexecuted_blocks=1 00:05:19.408 
00:05:19.408 ' 00:05:19.408 04:16:32 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:19.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.408 --rc genhtml_branch_coverage=1 00:05:19.408 --rc genhtml_function_coverage=1 00:05:19.408 --rc genhtml_legend=1 00:05:19.408 --rc geninfo_all_blocks=1 00:05:19.408 --rc geninfo_unexecuted_blocks=1 00:05:19.408 00:05:19.408 ' 00:05:19.408 04:16:32 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:19.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:19.408 --rc genhtml_branch_coverage=1 00:05:19.408 --rc genhtml_function_coverage=1 00:05:19.408 --rc genhtml_legend=1 00:05:19.408 --rc geninfo_all_blocks=1 00:05:19.408 --rc geninfo_unexecuted_blocks=1 00:05:19.408 00:05:19.408 ' 00:05:19.408 04:16:32 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.408 04:16:32 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:19.408 04:16:32 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.408 04:16:32 thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.408 ************************************ 00:05:19.408 START TEST thread_poller_perf 00:05:19.408 ************************************ 00:05:19.408 04:16:32 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:19.408 [2024-11-05 04:16:32.991154] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:05:19.408 [2024-11-05 04:16:32.991258] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2768328 ] 00:05:19.670 [2024-11-05 04:16:33.066555] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.670 [2024-11-05 04:16:33.102362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.670 Running 1000 pollers for 1 seconds with 1 microseconds period. 
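The summary table printed next reports busy TSC cycles, the total number of poller executions, and the TSC frequency; the per-poll cost is simply their ratio, converted to nanoseconds through the TSC rate. As a reading of the output format, not the tool's exact source:

poller_cost_cyc=$(( busy / total_run_count ))                   # TSC cycles per poller call
poller_cost_nsec=$(( poller_cost_cyc * 1000000000 / tsc_hz ))   # cycles to ns at the TSC rate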
00:05:20.612 [2024-11-05T03:16:34.252Z] ====================================== 00:05:20.612 [2024-11-05T03:16:34.252Z] busy:2411212060 (cyc) 00:05:20.612 [2024-11-05T03:16:34.252Z] total_run_count: 285000 00:05:20.612 [2024-11-05T03:16:34.252Z] tsc_hz: 2400000000 (cyc) 00:05:20.612 [2024-11-05T03:16:34.252Z] ====================================== 00:05:20.612 [2024-11-05T03:16:34.252Z] poller_cost: 8460 (cyc), 3525 (nsec) 00:05:20.612 00:05:20.612 real 0m1.174s 00:05:20.612 user 0m1.101s 00:05:20.612 sys 0m0.069s 00:05:20.612 04:16:34 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:20.612 04:16:34 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.612 ************************************ 00:05:20.612 END TEST thread_poller_perf 00:05:20.612 ************************************ 00:05:20.612 04:16:34 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:20.612 04:16:34 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:20.612 04:16:34 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:20.612 04:16:34 thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.612 ************************************ 00:05:20.612 START TEST thread_poller_perf 00:05:20.612 ************************************ 00:05:20.612 04:16:34 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:20.612 [2024-11-05 04:16:34.240594] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:05:20.612 [2024-11-05 04:16:34.240700] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2768678 ] 00:05:20.872 [2024-11-05 04:16:34.315083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.872 [2024-11-05 04:16:34.348662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.872 Running 1000 pollers for 1 seconds with 0 microseconds period. 
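Plugging in the first run's figures: 2411212060 busy cycles / 285000 runs ≈ 8460 cycles per poll, and 8460 cycles at 2.4 cycles per nanosecond ≈ 3525 nsec, matching the reported poller_cost. The 0-microsecond run whose output follows works out the same way: 2402060668 / 3815000 ≈ 629 cyc, or about 262 nsec.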
00:05:21.813 [2024-11-05T03:16:35.453Z] ====================================== 00:05:21.813 [2024-11-05T03:16:35.453Z] busy:2402060668 (cyc) 00:05:21.813 [2024-11-05T03:16:35.453Z] total_run_count: 3815000 00:05:21.813 [2024-11-05T03:16:35.453Z] tsc_hz: 2400000000 (cyc) 00:05:21.813 [2024-11-05T03:16:35.453Z] ====================================== 00:05:21.813 [2024-11-05T03:16:35.453Z] poller_cost: 629 (cyc), 262 (nsec) 00:05:21.813 00:05:21.813 real 0m1.162s 00:05:21.813 user 0m1.096s 00:05:21.813 sys 0m0.063s 00:05:21.813 04:16:35 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.813 04:16:35 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:21.813 ************************************ 00:05:21.813 END TEST thread_poller_perf 00:05:21.813 ************************************ 00:05:21.813 04:16:35 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:21.813 00:05:21.813 real 0m2.692s 00:05:21.813 user 0m2.376s 00:05:21.813 sys 0m0.330s 00:05:21.813 04:16:35 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.813 04:16:35 thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.813 ************************************ 00:05:21.813 END TEST thread 00:05:21.813 ************************************ 00:05:22.074 04:16:35 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:22.074 04:16:35 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:22.074 04:16:35 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:22.074 04:16:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:22.074 04:16:35 -- common/autotest_common.sh@10 -- # set +x 00:05:22.074 ************************************ 00:05:22.074 START TEST app_cmdline 00:05:22.074 ************************************ 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:22.074 * Looking for test storage... 
00:05:22.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.074 04:16:35 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:22.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.074 --rc genhtml_branch_coverage=1 00:05:22.074 --rc genhtml_function_coverage=1 00:05:22.074 --rc genhtml_legend=1 00:05:22.074 --rc geninfo_all_blocks=1 00:05:22.074 --rc geninfo_unexecuted_blocks=1 00:05:22.074 00:05:22.074 ' 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:22.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.074 --rc genhtml_branch_coverage=1 00:05:22.074 --rc genhtml_function_coverage=1 00:05:22.074 --rc genhtml_legend=1 00:05:22.074 --rc geninfo_all_blocks=1 00:05:22.074 --rc geninfo_unexecuted_blocks=1 
00:05:22.074 00:05:22.074 ' 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:22.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.074 --rc genhtml_branch_coverage=1 00:05:22.074 --rc genhtml_function_coverage=1 00:05:22.074 --rc genhtml_legend=1 00:05:22.074 --rc geninfo_all_blocks=1 00:05:22.074 --rc geninfo_unexecuted_blocks=1 00:05:22.074 00:05:22.074 ' 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:22.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.074 --rc genhtml_branch_coverage=1 00:05:22.074 --rc genhtml_function_coverage=1 00:05:22.074 --rc genhtml_legend=1 00:05:22.074 --rc geninfo_all_blocks=1 00:05:22.074 --rc geninfo_unexecuted_blocks=1 00:05:22.074 00:05:22.074 ' 00:05:22.074 04:16:35 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:22.074 04:16:35 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2769082 00:05:22.074 04:16:35 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2769082 00:05:22.074 04:16:35 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 2769082 ']' 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:22.074 04:16:35 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:22.335 [2024-11-05 04:16:35.763711] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
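Note the --rpcs-allowed flag on the spdk_tgt command line above: the target starts with an allowlist of exactly two RPC methods, which is what the rest of this test exercises. Sketched with the standalone rpc.py client (the suite drives this through its rpc_cmd wrapper):

spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
./scripts/rpc.py spdk_get_version          # allowed: returns the version JSON shown below
./scripts/rpc.py rpc_get_methods           # allowed: lists exactly these two methods
./scripts/rpc.py env_dpdk_get_mem_stats    # rejected with -32601 "Method not found"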
00:05:22.335 [2024-11-05 04:16:35.763788] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2769082 ] 00:05:22.335 [2024-11-05 04:16:35.838020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.335 [2024-11-05 04:16:35.873844] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.595 04:16:36 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:22.595 04:16:36 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:22.595 04:16:36 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:22.595 { 00:05:22.595 "version": "SPDK v25.01-pre git sha1 d0fd7ad59", 00:05:22.595 "fields": { 00:05:22.595 "major": 25, 00:05:22.595 "minor": 1, 00:05:22.595 "patch": 0, 00:05:22.595 "suffix": "-pre", 00:05:22.595 "commit": "d0fd7ad59" 00:05:22.595 } 00:05:22.595 } 00:05:22.856 04:16:36 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:22.856 04:16:36 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:22.856 04:16:36 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:22.856 04:16:36 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:22.856 04:16:36 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:22.856 04:16:36 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.856 04:16:36 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.856 04:16:36 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:22.856 04:16:36 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:22.856 04:16:36 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:22.856 request: 00:05:22.856 { 00:05:22.856 "method": "env_dpdk_get_mem_stats", 00:05:22.856 "req_id": 1 00:05:22.856 } 00:05:22.856 Got JSON-RPC error response 00:05:22.856 response: 00:05:22.856 { 00:05:22.856 "code": -32601, 00:05:22.856 "message": "Method not found" 00:05:22.856 } 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:22.856 04:16:36 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2769082 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 2769082 ']' 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 2769082 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:22.856 04:16:36 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2769082 00:05:23.116 04:16:36 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:23.116 04:16:36 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:23.116 04:16:36 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2769082' 00:05:23.116 killing process with pid 2769082 00:05:23.116 04:16:36 app_cmdline -- common/autotest_common.sh@971 -- # kill 2769082 00:05:23.116 04:16:36 app_cmdline -- common/autotest_common.sh@976 -- # wait 2769082 00:05:23.116 00:05:23.116 real 0m1.241s 00:05:23.116 user 0m1.539s 00:05:23.116 sys 0m0.414s 00:05:23.116 04:16:36 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.116 04:16:36 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:23.116 ************************************ 00:05:23.116 END TEST app_cmdline 00:05:23.116 ************************************ 00:05:23.377 04:16:36 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:23.377 04:16:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:23.377 04:16:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.377 04:16:36 -- common/autotest_common.sh@10 -- # set +x 00:05:23.377 ************************************ 00:05:23.377 START TEST version 00:05:23.377 ************************************ 00:05:23.377 04:16:36 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:23.377 * Looking for test storage... 
00:05:23.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:23.377 04:16:36 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:23.377 04:16:36 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:23.377 04:16:36 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:23.377 04:16:36 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:23.377 04:16:36 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.377 04:16:37 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.377 04:16:37 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.377 04:16:37 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.377 04:16:37 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.377 04:16:37 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.377 04:16:37 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.377 04:16:37 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.377 04:16:37 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.377 04:16:37 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.377 04:16:37 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.377 04:16:37 version -- scripts/common.sh@344 -- # case "$op" in 00:05:23.377 04:16:37 version -- scripts/common.sh@345 -- # : 1 00:05:23.377 04:16:37 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.377 04:16:37 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.377 04:16:37 version -- scripts/common.sh@365 -- # decimal 1 00:05:23.377 04:16:37 version -- scripts/common.sh@353 -- # local d=1 00:05:23.377 04:16:37 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.377 04:16:37 version -- scripts/common.sh@355 -- # echo 1 00:05:23.377 04:16:37 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.377 04:16:37 version -- scripts/common.sh@366 -- # decimal 2 00:05:23.377 04:16:37 version -- scripts/common.sh@353 -- # local d=2 00:05:23.377 04:16:37 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.377 04:16:37 version -- scripts/common.sh@355 -- # echo 2 00:05:23.639 04:16:37 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.639 04:16:37 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.639 04:16:37 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.639 04:16:37 version -- scripts/common.sh@368 -- # return 0 00:05:23.639 04:16:37 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.639 04:16:37 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:23.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.639 --rc genhtml_branch_coverage=1 00:05:23.639 --rc genhtml_function_coverage=1 00:05:23.639 --rc genhtml_legend=1 00:05:23.639 --rc geninfo_all_blocks=1 00:05:23.639 --rc geninfo_unexecuted_blocks=1 00:05:23.639 00:05:23.639 ' 00:05:23.639 04:16:37 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:23.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.639 --rc genhtml_branch_coverage=1 00:05:23.639 --rc genhtml_function_coverage=1 00:05:23.639 --rc genhtml_legend=1 00:05:23.639 --rc geninfo_all_blocks=1 00:05:23.639 --rc geninfo_unexecuted_blocks=1 00:05:23.639 00:05:23.639 ' 00:05:23.639 04:16:37 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:23.639 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.639 --rc genhtml_branch_coverage=1 00:05:23.639 --rc genhtml_function_coverage=1 00:05:23.639 --rc genhtml_legend=1 00:05:23.639 --rc geninfo_all_blocks=1 00:05:23.639 --rc geninfo_unexecuted_blocks=1 00:05:23.639 00:05:23.639 ' 00:05:23.639 04:16:37 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:23.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.639 --rc genhtml_branch_coverage=1 00:05:23.639 --rc genhtml_function_coverage=1 00:05:23.639 --rc genhtml_legend=1 00:05:23.639 --rc geninfo_all_blocks=1 00:05:23.639 --rc geninfo_unexecuted_blocks=1 00:05:23.639 00:05:23.639 ' 00:05:23.639 04:16:37 version -- app/version.sh@17 -- # get_header_version major 00:05:23.639 04:16:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.639 04:16:37 version -- app/version.sh@14 -- # cut -f2 00:05:23.639 04:16:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.639 04:16:37 version -- app/version.sh@17 -- # major=25 00:05:23.639 04:16:37 version -- app/version.sh@18 -- # get_header_version minor 00:05:23.639 04:16:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.639 04:16:37 version -- app/version.sh@14 -- # cut -f2 00:05:23.639 04:16:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.639 04:16:37 version -- app/version.sh@18 -- # minor=1 00:05:23.639 04:16:37 version -- app/version.sh@19 -- # get_header_version patch 00:05:23.639 04:16:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.639 04:16:37 version -- app/version.sh@14 -- # cut -f2 00:05:23.639 04:16:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.639 04:16:37 version -- app/version.sh@19 -- # patch=0 00:05:23.639 04:16:37 version -- app/version.sh@20 -- # get_header_version suffix 00:05:23.639 04:16:37 version -- app/version.sh@14 -- # cut -f2 00:05:23.639 04:16:37 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:23.639 04:16:37 version -- app/version.sh@14 -- # tr -d '"' 00:05:23.639 04:16:37 version -- app/version.sh@20 -- # suffix=-pre 00:05:23.639 04:16:37 version -- app/version.sh@22 -- # version=25.1 00:05:23.639 04:16:37 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:23.639 04:16:37 version -- app/version.sh@28 -- # version=25.1rc0 00:05:23.639 04:16:37 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:23.639 04:16:37 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:23.639 04:16:37 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:23.639 04:16:37 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:23.639 00:05:23.639 real 0m0.283s 00:05:23.639 user 0m0.165s 00:05:23.639 sys 0m0.165s 00:05:23.639 04:16:37 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.639 
04:16:37 version -- common/autotest_common.sh@10 -- # set +x 00:05:23.639 ************************************ 00:05:23.639 END TEST version 00:05:23.639 ************************************ 00:05:23.639 04:16:37 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:23.639 04:16:37 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:23.639 04:16:37 -- spdk/autotest.sh@194 -- # uname -s 00:05:23.639 04:16:37 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:23.639 04:16:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:23.639 04:16:37 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:23.639 04:16:37 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:23.639 04:16:37 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:05:23.639 04:16:37 -- spdk/autotest.sh@256 -- # timing_exit lib 00:05:23.639 04:16:37 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:23.639 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:05:23.639 04:16:37 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:05:23.639 04:16:37 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:05:23.639 04:16:37 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:05:23.639 04:16:37 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:05:23.639 04:16:37 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:05:23.639 04:16:37 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:05:23.639 04:16:37 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:23.639 04:16:37 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:23.639 04:16:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.639 04:16:37 -- common/autotest_common.sh@10 -- # set +x 00:05:23.639 ************************************ 00:05:23.639 START TEST nvmf_tcp 00:05:23.639 ************************************ 00:05:23.639 04:16:37 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:23.900 * Looking for test storage... 
00:05:23.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:23.900 04:16:37 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:23.900 04:16:37 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:23.900 04:16:37 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:23.900 04:16:37 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.900 04:16:37 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:23.900 04:16:37 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.900 04:16:37 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:23.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.900 --rc genhtml_branch_coverage=1 00:05:23.900 --rc genhtml_function_coverage=1 00:05:23.900 --rc genhtml_legend=1 00:05:23.900 --rc geninfo_all_blocks=1 00:05:23.900 --rc geninfo_unexecuted_blocks=1 00:05:23.900 00:05:23.900 ' 00:05:23.900 04:16:37 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:23.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.900 --rc genhtml_branch_coverage=1 00:05:23.900 --rc genhtml_function_coverage=1 00:05:23.900 --rc genhtml_legend=1 00:05:23.900 --rc geninfo_all_blocks=1 00:05:23.900 --rc geninfo_unexecuted_blocks=1 00:05:23.900 00:05:23.900 ' 00:05:23.900 04:16:37 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:05:23.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.900 --rc genhtml_branch_coverage=1 00:05:23.900 --rc genhtml_function_coverage=1 00:05:23.900 --rc genhtml_legend=1 00:05:23.900 --rc geninfo_all_blocks=1 00:05:23.900 --rc geninfo_unexecuted_blocks=1 00:05:23.900 00:05:23.900 ' 00:05:23.900 04:16:37 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:23.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.900 --rc genhtml_branch_coverage=1 00:05:23.900 --rc genhtml_function_coverage=1 00:05:23.900 --rc genhtml_legend=1 00:05:23.900 --rc geninfo_all_blocks=1 00:05:23.900 --rc geninfo_unexecuted_blocks=1 00:05:23.900 00:05:23.900 ' 00:05:23.900 04:16:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:23.900 04:16:37 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:23.900 04:16:37 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:23.900 04:16:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:23.900 04:16:37 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.900 04:16:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:23.900 ************************************ 00:05:23.900 START TEST nvmf_target_core 00:05:23.900 ************************************ 00:05:23.901 04:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:24.162 * Looking for test storage... 00:05:24.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:24.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.162 --rc genhtml_branch_coverage=1 00:05:24.162 --rc genhtml_function_coverage=1 00:05:24.162 --rc genhtml_legend=1 00:05:24.162 --rc geninfo_all_blocks=1 00:05:24.162 --rc geninfo_unexecuted_blocks=1 00:05:24.162 00:05:24.162 ' 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:24.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.162 --rc genhtml_branch_coverage=1 00:05:24.162 --rc genhtml_function_coverage=1 00:05:24.162 --rc genhtml_legend=1 00:05:24.162 --rc geninfo_all_blocks=1 00:05:24.162 --rc geninfo_unexecuted_blocks=1 00:05:24.162 00:05:24.162 ' 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:24.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.162 --rc genhtml_branch_coverage=1 00:05:24.162 --rc genhtml_function_coverage=1 00:05:24.162 --rc genhtml_legend=1 00:05:24.162 --rc geninfo_all_blocks=1 00:05:24.162 --rc geninfo_unexecuted_blocks=1 00:05:24.162 00:05:24.162 ' 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:24.162 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.162 --rc genhtml_branch_coverage=1 00:05:24.162 --rc genhtml_function_coverage=1 00:05:24.162 --rc genhtml_legend=1 00:05:24.162 --rc geninfo_all_blocks=1 00:05:24.162 --rc geninfo_unexecuted_blocks=1 00:05:24.162 00:05:24.162 ' 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:24.162 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:24.163 
************************************ 00:05:24.163 START TEST nvmf_abort 00:05:24.163 ************************************ 00:05:24.163 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:24.425 * Looking for test storage... 00:05:24.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:24.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.425 --rc genhtml_branch_coverage=1 00:05:24.425 --rc genhtml_function_coverage=1 00:05:24.425 --rc genhtml_legend=1 00:05:24.425 --rc geninfo_all_blocks=1 00:05:24.425 --rc geninfo_unexecuted_blocks=1 00:05:24.425 00:05:24.425 ' 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:24.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.425 --rc genhtml_branch_coverage=1 00:05:24.425 --rc genhtml_function_coverage=1 00:05:24.425 --rc genhtml_legend=1 00:05:24.425 --rc geninfo_all_blocks=1 00:05:24.425 --rc geninfo_unexecuted_blocks=1 00:05:24.425 00:05:24.425 ' 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:24.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.425 --rc genhtml_branch_coverage=1 00:05:24.425 --rc genhtml_function_coverage=1 00:05:24.425 --rc genhtml_legend=1 00:05:24.425 --rc geninfo_all_blocks=1 00:05:24.425 --rc geninfo_unexecuted_blocks=1 00:05:24.425 00:05:24.425 ' 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:24.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.425 --rc genhtml_branch_coverage=1 00:05:24.425 --rc genhtml_function_coverage=1 00:05:24.425 --rc genhtml_legend=1 00:05:24.425 --rc geninfo_all_blocks=1 00:05:24.425 --rc geninfo_unexecuted_blocks=1 00:05:24.425 00:05:24.425 ' 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.425 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
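Editor's note: the "[: : integer expression expected" diagnostics from nvmf/common.sh line 33, visible in the build_nvmf_app_args traces above, come from the test '[' '' -eq 1 ']'. The -eq operator of [ requires integer operands, so an empty expansion makes the command print a diagnostic and return false, and the script simply takes the false branch; a nonzero status inside an if condition is not fatal even under set -e. A minimal reproduction plus a guarded variant, assuming the value is an ordinary 0/1 environment toggle (the variable name here is hypothetical):

    flag=""
    if [ "$flag" -eq 1 ]; then        # prints "[: : integer expression expected"; test evaluates false
        echo "feature on"
    fi
    if [ "${flag:-0}" -eq 1 ]; then   # substitute 0 for an empty/unset value; no diagnostic
        echo "feature on"
    fi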
00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:24.426 04:16:37 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:32.563 04:16:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:32.563 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:32.563 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:32.563 04:16:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:32.563 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:32.563 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:32.564 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:32.564 04:16:44 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:32.564 04:16:44 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:32.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:32.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.663 ms 00:05:32.564 00:05:32.564 --- 10.0.0.2 ping statistics --- 00:05:32.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:32.564 rtt min/avg/max/mdev = 0.663/0.663/0.663/0.000 ms 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:32.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:32.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.292 ms 00:05:32.564 00:05:32.564 --- 10.0.0.1 ping statistics --- 00:05:32.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:32.564 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2773249 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2773249 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2773249 ']' 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:32.564 04:16:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.564 [2024-11-05 04:16:45.346034] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
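Editor's note: nvmfappstart launches the target inside the namespace created earlier, records its pid, and waits on the RPC socket before issuing commands. A simplified sketch of that launch-and-wait pattern; the polling loop is an illustration rather than the autotest's waitforlisten helper, and the socket path shown is the default:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # retry until the target accepts RPC requests
    done

The core mask -m 0xE (binary 1110) selects cores 1-3, which matches the "Total cores available: 3" and the three "Reactor started" notices below.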
00:05:32.564 [2024-11-05 04:16:45.346100] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:32.564 [2024-11-05 04:16:45.446765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:32.564 [2024-11-05 04:16:45.501012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:32.564 [2024-11-05 04:16:45.501067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:32.564 [2024-11-05 04:16:45.501075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:32.564 [2024-11-05 04:16:45.501083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:32.564 [2024-11-05 04:16:45.501089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:32.564 [2024-11-05 04:16:45.502900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.564 [2024-11-05 04:16:45.503267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.564 [2024-11-05 04:16:45.503268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.564 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:32.564 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:05:32.564 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:32.564 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:32.564 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.564 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:32.564 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:32.564 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.564 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.825 [2024-11-05 04:16:46.203131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.825 Malloc0 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.825 Delay0 
00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.825 [2024-11-05 04:16:46.282875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.825 04:16:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:32.825 [2024-11-05 04:16:46.414198] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:35.375 Initializing NVMe Controllers 00:05:35.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:35.375 controller IO queue size 128 less than required 00:05:35.375 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:35.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:35.375 Initialization complete. Launching workers. 
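Editor's note: the run that produces the notices above is the abort example; per the trace, -q sets the queue depth, -t the run time in seconds, -c the core mask, and -l the log level (flag meanings inferred from the invocation, not independently verified):

    ./build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128

The "IO queue size 128 less than required" notice indicates the controller's queue is smaller than the requested depth needs, so excess requests queue in the NVMe driver; the counters below then report completed vs. failed I/Os for the namespace and submitted vs. successful abort commands for the controller.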
00:05:35.375 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28929 00:05:35.375 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28990, failed to submit 62 00:05:35.375 success 28933, unsuccessful 57, failed 0 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:05:35.375 rmmod nvme_tcp 00:05:35.375 rmmod nvme_fabrics 00:05:35.375 rmmod nvme_keyring 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2773249 ']' 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2773249 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2773249 ']' 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2773249 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2773249 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2773249' 00:05:35.375 killing process with pid 2773249 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2773249 00:05:35.375 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2773249 00:05:35.376 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:05:35.376 04:16:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:05:35.376 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:05:35.376 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:05:35.376 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:05:35.376 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:05:35.376 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:05:35.376 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:05:35.376 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:05:35.376 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:35.376 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:35.376 04:16:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:37.289 04:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:05:37.289 00:05:37.289 real 0m13.161s 00:05:37.289 user 0m13.982s 00:05:37.289 sys 0m6.345s 00:05:37.289 04:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:37.289 04:16:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:37.289 ************************************ 00:05:37.289 END TEST nvmf_abort 00:05:37.289 ************************************ 00:05:37.550 04:16:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:37.551 04:16:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:37.551 04:16:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:37.551 04:16:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:37.551 ************************************ 00:05:37.551 START TEST nvmf_ns_hotplug_stress 00:05:37.551 ************************************ 00:05:37.551 04:16:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:37.551 * Looking for test storage... 
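Editor's note: the ns_hotplug_stress test starting here repeats the same bring-up, and the nvmf_abort teardown traced just above (nvmftestfini) undoes it step by step. A condensed sketch of that teardown; interface and namespace names are from this run, and ip netns delete stands in for the _remove_spdk_ns helper:

    kill "$nvmfpid" && wait "$nvmfpid"                      # stop nvmf_tgt (killprocess)
    modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics  # unload initiator modules
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop only the SPDK_NVMF-tagged rule
    ip netns delete cvl_0_0_ns_spdk                         # remove the target namespace
    ip -4 addr flush cvl_0_1                                # clear the initiator address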
00:05:37.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:37.551 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:37.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.812 --rc genhtml_branch_coverage=1 00:05:37.812 --rc genhtml_function_coverage=1 00:05:37.812 --rc genhtml_legend=1 00:05:37.812 --rc geninfo_all_blocks=1 00:05:37.812 --rc geninfo_unexecuted_blocks=1 00:05:37.812 00:05:37.812 ' 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:37.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.812 --rc genhtml_branch_coverage=1 00:05:37.812 --rc genhtml_function_coverage=1 00:05:37.812 --rc genhtml_legend=1 00:05:37.812 --rc geninfo_all_blocks=1 00:05:37.812 --rc geninfo_unexecuted_blocks=1 00:05:37.812 00:05:37.812 ' 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:37.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.812 --rc genhtml_branch_coverage=1 00:05:37.812 --rc genhtml_function_coverage=1 00:05:37.812 --rc genhtml_legend=1 00:05:37.812 --rc geninfo_all_blocks=1 00:05:37.812 --rc geninfo_unexecuted_blocks=1 00:05:37.812 00:05:37.812 ' 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:37.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.812 --rc genhtml_branch_coverage=1 00:05:37.812 --rc genhtml_function_coverage=1 00:05:37.812 --rc genhtml_legend=1 00:05:37.812 --rc geninfo_all_blocks=1 00:05:37.812 --rc geninfo_unexecuted_blocks=1 00:05:37.812 00:05:37.812 ' 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:37.812 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:37.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:05:37.813 04:16:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:05:45.952 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:45.952 
04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:05:45.952 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:05:45.952 Found net devices under 0000:4b:00.0: cvl_0_0 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:05:45.952 Found net devices under 0000:4b:00.1: cvl_0_1 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:45.952 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:05:45.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:45.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.686 ms 00:05:45.953 00:05:45.953 --- 10.0.0.2 ping statistics --- 00:05:45.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:45.953 rtt min/avg/max/mdev = 0.686/0.686/0.686/0.000 ms 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:45.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:45.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:05:45.953 00:05:45.953 --- 10.0.0.1 ping statistics --- 00:05:45.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:45.953 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2778288 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2778288 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 2778288 ']' 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.953 04:16:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:45.953 04:16:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:45.953 [2024-11-05 04:16:58.598375] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:05:45.953 [2024-11-05 04:16:58.598442] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:45.953 [2024-11-05 04:16:58.695219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:45.953 [2024-11-05 04:16:58.745397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:45.953 [2024-11-05 04:16:58.745447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:45.953 [2024-11-05 04:16:58.745456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:45.953 [2024-11-05 04:16:58.745464] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:45.953 [2024-11-05 04:16:58.745470] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
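Annotation: the xtrace above walks through nvmftestinit. The harness discovers the two E810 ports (0000:4b:00.0/1, exposed as cvl_0_0/cvl_0_1), moves the target-side port into a fresh network namespace, assigns 10.0.0.2/24 (target) and 10.0.0.1/24 (initiator), opens TCP port 4420 in iptables, verifies both directions with pings, then launches nvmf_tgt inside the namespace and waits for its RPC socket. The earlier "[: : integer expression expected" message from nvmf/common.sh line 33 is a harmless side effect of an integer test on an empty variable ('[' '' -eq 1 ']'); guarding the operand (e.g. ${var:-0}) would avoid it. Below is a minimal sketch of that bring-up sequence, assuming the interface names, addresses, and SPDK tree layout seen in this run; the readiness loop is a simplified stand-in for the harness's waitforlisten helper, not the script's exact code.

# Minimal sketch of the netns + target bring-up performed above.
TGT_NS=cvl_0_0_ns_spdk
ip netns add "$TGT_NS"
ip link set cvl_0_0 netns "$TGT_NS"            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator side stays in the root ns
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TGT_NS" ip link set cvl_0_0 up
ip netns exec "$TGT_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                             # root ns -> target ns
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1     # target ns -> root ns

# Start the target in the namespace: -i = shm id, -e = tracepoint group mask
# (the 0xFFFF notice above), -m = core mask 0xE (cores 1-3, matching the
# three "Reactor started" lines).
ip netns exec "$TGT_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
NVMF_PID=$!
# Poll the RPC socket until the app answers (unix sockets are shared across
# network namespaces, so rpc.py can run from the root ns as the harness does).
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$NVMF_PID" || exit 1              # bail out if the target died
    sleep 0.2
done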
00:05:45.953 [2024-11-05 04:16:58.747497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.953 [2024-11-05 04:16:58.747664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.953 [2024-11-05 04:16:58.747664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:45.953 04:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:45.953 04:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:05:45.953 04:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:05:45.953 04:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:45.953 04:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:45.953 04:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:45.953 04:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:45.953 04:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:46.214 [2024-11-05 04:16:59.602851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.214 04:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:46.214 04:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:46.474 [2024-11-05 04:16:59.968416] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:46.474 04:16:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:46.734 04:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:46.734 Malloc0 00:05:46.994 04:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:46.994 Delay0 00:05:46.994 04:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.255 04:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:47.517 NULL1 00:05:47.517 04:17:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:47.517 04:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2778686 00:05:47.517 04:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:47.517 04:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:47.517 04:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.777 04:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.038 04:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:48.038 04:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:48.038 true 00:05:48.038 04:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:48.038 04:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.335 04:17:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.633 04:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:48.633 04:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:48.633 true 00:05:48.633 04:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:48.633 04:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:48.923 04:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.185 04:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:49.185 04:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:49.185 true 00:05:49.185 04:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:49.185 04:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.446 04:17:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.707 04:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:49.707 04:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:49.707 true 00:05:49.967 04:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:49.967 04:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.967 04:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.227 04:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:50.227 04:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:50.488 true 00:05:50.488 04:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:50.488 04:17:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.488 04:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.748 04:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:50.748 04:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:51.009 true 00:05:51.009 04:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:51.010 04:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.010 04:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.270 04:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:51.270 04:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:51.531 true 00:05:51.531 04:17:04 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:51.531 04:17:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.531 04:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.791 04:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:51.791 04:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:52.052 true 00:05:52.052 04:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:52.052 04:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.052 04:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.314 04:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:52.314 04:17:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:52.575 true 00:05:52.575 04:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:52.575 04:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.835 04:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.835 04:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:52.835 04:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:53.096 true 00:05:53.096 04:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:53.096 04:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.357 04:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.357 04:17:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:53.357 04:17:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:53.617 true 00:05:53.617 04:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:53.617 04:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.878 04:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.878 04:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:53.878 04:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:54.139 true 00:05:54.139 04:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:54.139 04:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.400 04:17:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.662 04:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:54.662 04:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:54.662 true 00:05:54.662 04:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:54.662 04:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:54.923 04:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.184 04:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:55.184 04:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:55.184 true 00:05:55.184 04:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:55.184 04:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.446 04:17:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.706 04:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:55.706 04:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:55.706 true 00:05:55.706 04:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:55.706 04:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.967 04:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.228 04:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:56.228 04:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:56.228 true 00:05:56.228 04:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:56.228 04:17:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.488 04:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.749 04:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:56.749 04:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:56.749 true 00:05:56.749 04:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:56.749 04:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.011 04:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.272 04:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:57.272 04:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:57.272 true 00:05:57.533 04:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:57.533 04:17:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:57.533 04:17:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.794 04:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:57.794 04:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:58.055 true 00:05:58.055 04:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:58.055 04:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.055 04:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.316 04:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:58.316 04:17:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:58.577 true 00:05:58.577 04:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:58.577 04:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.577 04:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.837 04:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:58.837 04:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:59.098 true 00:05:59.098 04:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:59.098 04:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.360 04:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.360 04:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:59.360 04:17:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:59.621 true 00:05:59.621 04:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:05:59.621 04:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.882 04:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.882 04:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:59.882 04:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:00.143 true 00:06:00.143 04:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:06:00.143 04:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.404 04:17:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.404 04:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:00.404 04:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:00.664 true 00:06:00.664 04:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:06:00.664 04:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.924 04:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.185 04:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:01.185 04:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:01.185 true 00:06:01.185 04:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:06:01.185 04:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.445 04:17:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.705 04:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:01.705 04:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:01.705 true 00:06:01.705 04:17:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:06:01.705 04:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.966 04:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.226 04:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:02.226 04:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:02.226 true 00:06:02.226 04:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:06:02.226 04:17:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.486 04:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.746 04:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:02.746 04:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:02.746 true 00:06:03.007 04:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:06:03.007 04:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.007 04:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.268 04:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:03.268 04:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:03.529 true 00:06:03.529 04:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686 00:06:03.529 04:17:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.529 04:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.789 04:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:03.789 04:17:17 
00:06:03.789 04:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:06:04.049 true
00:06:04.049 04:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:04.049 04:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:04.309 04:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:04.309 04:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031
00:06:04.309 04:17:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031
00:06:04.571 true
00:06:04.571 04:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:04.571 04:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:04.832 04:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:04.832 04:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032
00:06:04.832 04:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032
00:06:05.092 true
00:06:05.092 04:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:05.092 04:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:05.353 04:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:05.353 04:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033
00:06:05.353 04:17:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033
00:06:05.613 true
00:06:05.613 04:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:05.613 04:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:05.874 04:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:06.136 04:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034
00:06:06.136 04:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034
00:06:06.136 true
00:06:06.136 04:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:06.136 04:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:06.397 04:17:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:06.658 04:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035
00:06:06.658 04:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035
00:06:06.658 true
00:06:06.658 04:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:06.658 04:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:06.918 04:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:07.179 04:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036
00:06:07.179 04:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036
00:06:07.179 true
00:06:07.439 04:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:07.439 04:17:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:07.439 04:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:07.699 04:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037
00:06:07.699 04:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037
00:06:07.960 true
00:06:07.960 04:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:07.960 04:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:07.960 04:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:08.221 04:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038
00:06:08.221 04:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038
00:06:08.481 true
00:06:08.481 04:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:08.481 04:17:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:08.742 04:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:08.742 04:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039
00:06:08.742 04:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039
00:06:09.002 true
00:06:09.002 04:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:09.002 04:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:09.262 04:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:09.262 04:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040
00:06:09.262 04:17:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040
00:06:09.523 true
00:06:09.523 04:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:09.523 04:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:09.783 04:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:10.044 04:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041
00:06:10.044 04:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041
00:06:10.044 true
00:06:10.044 04:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:10.044 04:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:10.304 04:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:10.565 04:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042
00:06:10.565 04:17:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042
00:06:10.565 true
00:06:10.565 04:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:10.565 04:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:10.826 04:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:11.087 04:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043
00:06:11.087 04:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043
00:06:11.087 true
00:06:11.087 04:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:11.087 04:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:11.348 04:17:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:11.610 04:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044
00:06:11.610 04:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044
00:06:11.610 true
00:06:11.872 04:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:11.872 04:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:11.872 04:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:12.131 04:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045
00:06:12.131 04:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045
00:06:12.392 true
00:06:12.392 04:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:12.392 04:17:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:12.392 04:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:12.653 04:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046
00:06:12.653 04:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046
00:06:12.914 true
00:06:12.914 04:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:12.914 04:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:12.914 04:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:13.174 04:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:06:13.174 04:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:06:13.435 true
00:06:13.435 04:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:13.435 04:17:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:13.695 04:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:13.695 04:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:06:13.695 04:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:06:13.954 true
00:06:13.954 04:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:13.954 04:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:14.214 04:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:14.214 04:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049
00:06:14.214 04:17:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049
00:06:14.475 true
00:06:14.475 04:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:14.475 04:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:14.736 04:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:14.996 04:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050
00:06:14.996 04:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050
00:06:14.996 true
00:06:14.996 04:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:14.996 04:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:15.257 04:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:15.517 04:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051
00:06:15.517 04:17:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051
00:06:15.517 true
00:06:15.517 04:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:15.517 04:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:15.778 04:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:16.038 04:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052
00:06:16.038 04:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052
00:06:16.038 true
00:06:16.298 04:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:16.298 04:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:16.298 04:17:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:16.558 04:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:06:16.558 04:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:06:16.819 true
00:06:16.819 04:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:16.819 04:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:16.819 04:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:17.079 04:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:06:17.079 04:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:06:17.340 true
00:06:17.340 04:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:17.340 04:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:17.600 04:17:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:17.600 04:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:06:17.600 04:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:06:17.863 true
00:06:17.864 04:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:17.864 04:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:17.864 Initializing NVMe Controllers
00:06:17.864 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:17.864 Controller IO queue size 128, less than required.
00:06:17.864 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:17.864 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:17.864 Initialization complete. Launching workers.
00:06:17.864 ========================================================
00:06:17.864                                                                                                          Latency(us)
00:06:17.864 Device Information                                                                   : IOPS      MiB/s    Average        min        max
00:06:17.864 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30258.66      14.77    4230.04    1602.47    8613.36
00:06:17.864 ========================================================
00:06:17.864 Total                                                                                : 30258.66      14.77    4230.04    1602.47    8613.36
00:06:18.127 04:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:18.127 04:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056
00:06:18.127 04:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056
00:06:18.387 true
00:06:18.387 04:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2778686
00:06:18.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2778686) - No such process
00:06:18.387 04:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2778686
00:06:18.387 04:17:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:18.648 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:18.648 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:18.648 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:18.648 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:18.648 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:18.648 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:18.908 null0
00:06:18.908 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:18.908 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:18.908 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:19.168 null1
00:06:19.168 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:19.168 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:19.168 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:19.168 null2
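Taken together, the @44-@50 trace lines above give the shape of the single-namespace hotplug loop in ns_hotplug_stress.sh: while the background I/O generator (PID 2778686 in this run) stays alive, namespace 1 is removed from nqn.2016-06.io.spdk:cnode1 and re-added backed by the Delay0 bdev, and the NULL1 null bdev is resized one step larger each pass (null_size 1023 through 1056 above), until kill -0 reports "No such process" at 00:06:18.387 and the script moves on. A minimal bash sketch reconstructed from that trace; rpc_py, perf_pid, and the starting null_size are stand-in names and values, not taken from the script:

    null_size=$initial_size                  # starting value is not shown in this excerpt
    while kill -0 "$perf_pid"; do            # sh@44: loop while the I/O generator runs
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # sh@45
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # sh@46
        ((++null_size))                                                    # sh@49
        "$rpc_py" bdev_null_resize NULL1 $null_size                        # sh@50
    done

The "true" lines interleaved in the trace are the RPC responses; each bdev_null_resize acknowledges with true while the namespace underneath it is being hot-removed and hot-added, which is exactly the race this phase is stressing.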
00:06:19.168 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:19.168 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:19.168 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:06:19.427 null3
00:06:19.427 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:19.427 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:19.427 04:17:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:06:19.686 null4
00:06:19.686 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:19.686 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:19.686 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:06:19.686 null5
00:06:19.686 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:19.686 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:19.686 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:06:19.945 null6
00:06:19.945 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:19.945 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:19.945 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:06:20.205 null7
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
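The @58-@60 lines just above set up the backing devices for the parallel phase: eight null bdevs (null0 through null7), each created with arguments 100 and 4096, which appear to be total size in MB and block size in bytes per rpc.py bdev_null_create's usual name/size/block-size argument order. A sketch of that setup loop, with rpc_py again a stand-in name:

    nthreads=8
    pids=()                                            # sh@58
    for ((i = 0; i < nthreads; i++)); do               # sh@59
        "$rpc_py" bdev_null_create "null$i" 100 4096   # sh@60: echoes the bdev name on success
    done

The bare "null0" ... "null7" lines in the log are those RPC responses, each echoing the name of the bdev just created.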
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:20.205 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2785469 2785471 2785474 2785477 2785480 2785483 2785486 2785489
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:20.206 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:20.467 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:20.467 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:20.467 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:20.467 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:20.467 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:20.467 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:20.467 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.467 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.467 04:17:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:20.467 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.467 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.467 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:20.467 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.467 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.467 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:20.467 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.467 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.467 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:20.467 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.467 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.467 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:20.467 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.467 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.467 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.728 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:20.989 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.990 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.990 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:20.990 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:20.990 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:20.990 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:20.990 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:20.990 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:20.990 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:20.990 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.251 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:21.512 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.512 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.512 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:21.512 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:21.512 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:21.512 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:21.512 04:17:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.512 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:21.773 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.034 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.296 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.557 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.557 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.557 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.557 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.557 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.557 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.557 04:17:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.557 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.557 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.557 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.557 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.557 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.557 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.557 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.557 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.557 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.557 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:22.558 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.558 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.558 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.558 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.558 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.558 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.558 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.558 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.558 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.558 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.558 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:22.558 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.818 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.079 04:17:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.079 04:17:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.079 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:23.338 04:17:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.599 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.859 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.859 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.859 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.859 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.859 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.859 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.859 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.859 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.859 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.859 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.859 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.859 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.859 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
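The interleaved @16/@17/@18 trace records above are the heart of ns_hotplug_stress: bounded loops that attach and detach namespaces on cnode1 as fast as the RPC layer allows, while host I/O stays in flight. A minimal sketch of that pattern, assuming the rpc.py path from the trace and the null0-null7 bdevs created earlier in the test (the worker body here is simplified; only the add/remove race matters):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    stress_ns() {
        local nsid=$1 bdev=$2 i
        for (( i = 0; i < 10; i++ )); do
            # Attach the bdev as namespace $nsid, then rip it out again
            # while host I/O is still running; failures are tolerated,
            # the race itself is the point of the test.
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev" || true
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid" || true
        done
    }

    # One background worker per namespace, so the add/remove calls
    # interleave across nsids 1-8 like the shuffled ordering above.
    for n in {1..8}; do
        stress_ns "$n" "null$((n - 1))" &
    done
    wait

Running the workers in the background is what produces the out-of-order add/remove records in the log: each worker races the others for the RPC socket, so the subsystem sees a constantly changing namespace map.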
00:06:23.859 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:24.127 rmmod nvme_tcp 00:06:24.127 rmmod nvme_fabrics 00:06:24.127 rmmod nvme_keyring 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2778288 ']' 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2778288 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2778288 ']' 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2778288 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2778288 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2778288' 00:06:24.127 killing process with pid 2778288 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2778288 00:06:24.127 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2778288 
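Teardown in the trace above is deliberately defensive: killprocess verifies what it is about to signal, and the cleanup path retries kernel module unload because the initiator side may still be closing connections. A hedged sketch of those two helpers, reconstructed from the common.sh / autotest_common.sh records (real function names from the trace, simplified bodies):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0        # already gone
        # Mirror the `ps --no-headers -o comm=` check from the trace:
        # never signal a process we did not start (e.g. a sudo helper).
        [[ $(ps --no-headers -o comm= "$pid") == sudo ]] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }

    nvmfcleanup() {
        sync
        set +e                   # unload can race with connection close
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 1
        done
        set -e
    }

The bare `rmmod nvme_tcp` / `rmmod nvme_fabrics` / `rmmod nvme_keyring` lines interleaved in the log are the kernel's own confirmation that the unload loop succeeded on its first pass.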
00:06:24.437 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:24.437 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:24.437 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:24.437 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:24.437 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:24.437 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:24.437 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:24.438 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:24.438 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:24.438 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.438 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.438 04:17:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.384 04:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:26.384 00:06:26.384 real 0m48.916s 00:06:26.384 user 3m20.466s 00:06:26.384 sys 0m16.721s 00:06:26.384 04:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.384 04:17:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:26.384 ************************************ 00:06:26.384 END TEST nvmf_ns_hotplug_stress 00:06:26.384 ************************************ 00:06:26.384 04:17:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:26.384 04:17:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:26.384 04:17:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.384 04:17:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:26.384 ************************************ 00:06:26.384 START TEST nvmf_delete_subsystem 00:06:26.384 ************************************ 00:06:26.384 04:17:39 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:26.645 * Looking for test storage... 
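Between tests, the `iptr` step traced above removes only the firewall rules the suite itself installed: it round-trips the ruleset through iptables-save, filters out the SPDK_NVMF-tagged lines, and feeds the remainder back to iptables-restore. A minimal sketch of that pattern, assuming the suite's rules were created with an identifiable SPDK_NVMF comment (as the grep in the trace implies):

    iptr() {
        # Non-destructive cleanup: rules the suite did not add survive.
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    # Matching interface cleanup from the same trace: drop any IPv4
    # address left on the second test port before the next run claims it.
    ip -4 addr flush cvl_0_1

Filtering the saved ruleset is also idempotent, so it is safe to run even when an earlier test already cleaned up after itself.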
00:06:26.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:26.645 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:26.645 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:06:26.645 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:26.645 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:26.645 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.645 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.645 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.645 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.645 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.645 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.645 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.645 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.645 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.645 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.645 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:26.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.646 --rc genhtml_branch_coverage=1 00:06:26.646 --rc genhtml_function_coverage=1 00:06:26.646 --rc genhtml_legend=1 00:06:26.646 --rc geninfo_all_blocks=1 00:06:26.646 --rc geninfo_unexecuted_blocks=1 00:06:26.646 00:06:26.646 ' 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:26.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.646 --rc genhtml_branch_coverage=1 00:06:26.646 --rc genhtml_function_coverage=1 00:06:26.646 --rc genhtml_legend=1 00:06:26.646 --rc geninfo_all_blocks=1 00:06:26.646 --rc geninfo_unexecuted_blocks=1 00:06:26.646 00:06:26.646 ' 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:26.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.646 --rc genhtml_branch_coverage=1 00:06:26.646 --rc genhtml_function_coverage=1 00:06:26.646 --rc genhtml_legend=1 00:06:26.646 --rc geninfo_all_blocks=1 00:06:26.646 --rc geninfo_unexecuted_blocks=1 00:06:26.646 00:06:26.646 ' 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:26.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.646 --rc genhtml_branch_coverage=1 00:06:26.646 --rc genhtml_function_coverage=1 00:06:26.646 --rc genhtml_legend=1 00:06:26.646 --rc geninfo_all_blocks=1 00:06:26.646 --rc geninfo_unexecuted_blocks=1 00:06:26.646 00:06:26.646 ' 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:26.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:26.646 04:17:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:34.799 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:34.799 
04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:34.799 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:34.799 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:34.799 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:34.800 Found net devices under 0000:4b:00.1: cvl_0_1 
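The pass above is the harness's NIC discovery: PCI functions are matched against a table of known Intel/Mellanox device IDs (both ports found here are E810, 0x8086:0x159b), then each matching function is resolved to its kernel net device through sysfs, yielding cvl_0_0 and cvl_0_1. A minimal standalone sketch of that sysfs lookup, assuming the same vendor/device pair (this is not the nvmf/common.sh helper itself):

  #!/usr/bin/env bash
  # Sketch: resolve a PCI vendor:device pair to its net devices via sysfs,
  # the way the gather_supported_nvmf_pci_devs pass above does for 0x8086:0x159b.
  vendor=0x8086 device=0x159b
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == "$vendor" && $(<"$pci/device") == "$device" ]] || continue
      for net in "$pci"/net/*; do
          [[ -e $net ]] && echo "Found ${pci##*/}: ${net##*/}"
      done
  done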
00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:34.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:34.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.671 ms 00:06:34.800 00:06:34.800 --- 10.0.0.2 ping statistics --- 00:06:34.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.800 rtt min/avg/max/mdev = 0.671/0.671/0.671/0.000 ms 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:34.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:34.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:06:34.800 00:06:34.800 --- 10.0.0.1 ping statistics --- 00:06:34.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:34.800 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2790643 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2790643 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 2790643 ']' 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:34.800 04:17:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.800 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:34.800 04:17:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.800 [2024-11-05 04:17:47.470028] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:06:34.800 [2024-11-05 04:17:47.470098] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:34.800 [2024-11-05 04:17:47.551441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.800 [2024-11-05 04:17:47.592618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:34.800 [2024-11-05 04:17:47.592653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:34.800 [2024-11-05 04:17:47.592662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:34.800 [2024-11-05 04:17:47.592669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:34.800 [2024-11-05 04:17:47.592678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:34.800 [2024-11-05 04:17:47.593862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.800 [2024-11-05 04:17:47.594005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.800 [2024-11-05 04:17:48.308040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:34.800 04:17:48 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.800 [2024-11-05 04:17:48.332215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.800 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.801 NULL1 00:06:34.801 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.801 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:34.801 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.801 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.801 Delay0 00:06:34.801 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.801 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.801 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:34.801 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:34.801 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:34.801 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2790767 00:06:34.801 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:34.801 04:17:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:35.061 [2024-11-05 04:17:48.439091] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
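Condensed, the target bring-up above is a six-RPC sequence: create the TCP transport, create subsystem cnode1 capped at 10 namespaces, listen on 10.0.0.2:4420, and expose a null bdev wrapped in a delay bdev whose 1000000us latencies guarantee that I/O is still queued when the subsystem is later deleted. A sketch of the same sequence driven through rpc.py (the rpc.py location is an assumption; every command and argument is copied from the rpc_cmd entries above):

  # Assumed rpc.py location under the SPDK tree shown in the log.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0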
00:06:36.974 04:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:36.974 04:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:36.974 04:17:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:37.234 Write completed with error (sct=0, sc=8) 00:06:37.234 Read completed with error (sct=0, sc=8) 00:06:37.234 Read completed with error (sct=0, sc=8) 00:06:37.234 starting I/O failed: -6 00:06:37.234 Write completed with error (sct=0, sc=8) 00:06:37.234 Read completed with error (sct=0, sc=8) 00:06:37.234 Write completed with error (sct=0, sc=8) 00:06:37.234 Read completed with error (sct=0, sc=8) 00:06:37.234 starting I/O failed: -6 00:06:37.234 Read completed with error (sct=0, sc=8) 00:06:37.234 Read completed with error (sct=0, sc=8) 00:06:37.234 Read completed with error (sct=0, sc=8) 00:06:37.234 Read completed with error (sct=0, sc=8) 00:06:37.234 starting I/O failed: -6 00:06:37.234 Read completed with error (sct=0, sc=8) 00:06:37.234 Read completed with error (sct=0, sc=8) 00:06:37.234 Read completed with error (sct=0, sc=8) 00:06:37.234 Read completed with error (sct=0, sc=8) 00:06:37.234 starting I/O failed: -6 00:06:37.234 Read completed with error (sct=0, sc=8) 00:06:37.234 Write completed with error (sct=0, sc=8) 00:06:37.234 Read completed with error (sct=0, sc=8) 00:06:37.234 Read completed with error (sct=0, sc=8) 00:06:37.234 starting I/O failed: -6 00:06:37.234 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 [2024-11-05 04:17:50.685595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48680 is same with 
the state(6) to be set 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read 
completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 starting I/O failed: -6 00:06:37.235 [2024-11-05 04:17:50.687651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd53c000c00 is same with the state(6) to be set 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed 
with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Write completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:37.235 Read completed with error (sct=0, sc=8) 00:06:38.177 [2024-11-05 04:17:51.661801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb499a0 is same with the state(6) to be set 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 [2024-11-05 04:17:51.689111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb484a0 is same with the state(6) to be set 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 
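The completion-error entries on either side of this point are the expected payload of the test: nvmf_delete_subsystem is issued at delete_subsystem.sh@32 while spdk_nvme_perf still holds a full queue (-q 128) against the slow Delay0 namespace, so every outstanding request fails with -6 and the qpairs are torn down. The shape of that race, reconstructed from the perf invocation and the sleep 2 shown earlier (a sketch; the rpc.py path is assumed):

  perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed path
  # 5s random 70/30 R/W run, QD 128, 512B IOs, as launched at delete_subsystem.sh@26
  $perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!
  sleep 2   # let I/O queue up behind the delay bdev (delete_subsystem.sh@30)
  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # fails all queued I/O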
00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 [2024-11-05 04:17:51.690272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb48860 is same with the state(6) to be set 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 [2024-11-05 04:17:51.690541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd53c00cfe0 is same with the state(6) to be set 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 
Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Write completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 Read completed with error (sct=0, sc=8) 00:06:38.177 [2024-11-05 04:17:51.690660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fd53c00d780 is same with the state(6) to be set 00:06:38.177 Initializing NVMe Controllers 00:06:38.177 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:38.177 Controller IO queue size 128, less than required. 00:06:38.177 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:38.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:38.177 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:38.177 Initialization complete. Launching workers. 00:06:38.177 ======================================================== 00:06:38.177 Latency(us) 00:06:38.177 Device Information : IOPS MiB/s Average min max 00:06:38.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.16 0.09 880057.57 261.25 1008468.83 00:06:38.177 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.77 0.08 1001175.62 300.15 2002438.63 00:06:38.177 ======================================================== 00:06:38.177 Total : 331.93 0.16 936530.90 261.25 2002438.63 00:06:38.177 00:06:38.177 [2024-11-05 04:17:51.691246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb499a0 (9): Bad file descriptor 00:06:38.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:38.177 04:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.177 04:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:38.177 04:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2790767 00:06:38.177 04:17:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2790767 00:06:38.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2790767) - No such process 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2790767 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 
2790767 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2790767 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:38.749 [2024-11-05 04:17:52.221065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2791540 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2791540 
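From here the test repeats the pattern under a watchdog: cnode1 has been re-created with Delay0 attached, a 3-second perf run is started (perf_pid 2791540), and the kill -0 / sleep 0.5 entries before and after this point are iterations of a bounded poll waiting for perf to exit. Its shape, with names mirroring the script (a sketch; the bound of 20 is taken from the (( delay++ > 20 )) checks in the log):

  # Bounded poll corresponding to the delete_subsystem.sh@57/@58/@60 entries.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && { echo "perf $perf_pid did not exit in time" >&2; break; }
      sleep 0.5
  done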
00:06:38.749 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:38.749 [2024-11-05 04:17:52.300877] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:39.321 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:39.321 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2791540 00:06:39.321 04:17:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:39.892 04:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:39.892 04:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2791540 00:06:39.892 04:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:40.154 04:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:40.154 04:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2791540 00:06:40.154 04:17:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:40.727 04:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:40.727 04:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2791540 00:06:40.727 04:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:41.299 04:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:41.299 04:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2791540 00:06:41.299 04:17:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:41.871 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:41.871 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2791540 00:06:41.871 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:42.132 Initializing NVMe Controllers 00:06:42.132 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:42.132 Controller IO queue size 128, less than required. 00:06:42.132 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:42.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:42.132 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:42.132 Initialization complete. Launching workers. 
00:06:42.132 ======================================================== 00:06:42.132 Latency(us) 00:06:42.132 Device Information : IOPS MiB/s Average min max 00:06:42.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002542.55 1000191.15 1041766.61 00:06:42.132 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003131.16 1000201.32 1041876.89 00:06:42.132 ======================================================== 00:06:42.132 Total : 256.00 0.12 1002836.86 1000191.15 1041876.89 00:06:42.132 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2791540 00:06:42.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2791540) - No such process 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2791540 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:42.393 rmmod nvme_tcp 00:06:42.393 rmmod nvme_fabrics 00:06:42.393 rmmod nvme_keyring 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2790643 ']' 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2790643 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2790643 ']' 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2790643 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2790643 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2790643' 00:06:42.393 killing process with pid 2790643 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2790643 00:06:42.393 04:17:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 2790643 00:06:42.393 04:17:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:42.393 04:17:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:42.654 04:17:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:42.654 04:17:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:06:42.655 04:17:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:06:42.655 04:17:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:42.655 04:17:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:06:42.655 04:17:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:42.655 04:17:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:42.655 04:17:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:42.655 04:17:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:42.655 04:17:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.570 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:44.570 00:06:44.570 real 0m18.131s 00:06:44.570 user 0m31.068s 00:06:44.570 sys 0m6.610s 00:06:44.570 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.570 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.570 ************************************ 00:06:44.570 END TEST nvmf_delete_subsystem 00:06:44.570 ************************************ 00:06:44.570 04:17:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:44.570 04:17:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:44.570 04:17:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.570 04:17:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:44.570 ************************************ 00:06:44.570 START TEST nvmf_host_management 00:06:44.570 ************************************ 00:06:44.570 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:44.832 * Looking for test storage... 
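The END/START banners and the real/user/sys timing above come from the harness's run_test wrapper, which brackets each sub-test script with banners and times it; nvmf_host_management is the next script up, invoked with --transport=tcp. A sketch of what such a wrapper looks like, inferred from the banner and timing lines in the log (not autotest_common.sh itself):

  # Inferred shape of run_test: banner, timed execution, closing banner.
  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }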
00:06:44.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:44.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.832 --rc genhtml_branch_coverage=1 00:06:44.832 --rc genhtml_function_coverage=1 00:06:44.832 --rc genhtml_legend=1 00:06:44.832 --rc geninfo_all_blocks=1 00:06:44.832 --rc geninfo_unexecuted_blocks=1 00:06:44.832 00:06:44.832 ' 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:44.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.832 --rc genhtml_branch_coverage=1 00:06:44.832 --rc genhtml_function_coverage=1 00:06:44.832 --rc genhtml_legend=1 00:06:44.832 --rc geninfo_all_blocks=1 00:06:44.832 --rc geninfo_unexecuted_blocks=1 00:06:44.832 00:06:44.832 ' 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:44.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.832 --rc genhtml_branch_coverage=1 00:06:44.832 --rc genhtml_function_coverage=1 00:06:44.832 --rc genhtml_legend=1 00:06:44.832 --rc geninfo_all_blocks=1 00:06:44.832 --rc geninfo_unexecuted_blocks=1 00:06:44.832 00:06:44.832 ' 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:44.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.832 --rc genhtml_branch_coverage=1 00:06:44.832 --rc genhtml_function_coverage=1 00:06:44.832 --rc genhtml_legend=1 00:06:44.832 --rc geninfo_all_blocks=1 00:06:44.832 --rc geninfo_unexecuted_blocks=1 00:06:44.832 00:06:44.832 ' 00:06:44.832 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:06:44.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:06:44.833 04:17:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:06:52.982 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:06:52.982 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:06:52.982 Found net devices under 0000:4b:00.0: cvl_0_0 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:52.982 04:18:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:06:52.982 Found net devices under 0000:4b:00.1: cvl_0_1 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:52.982 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:52.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:52.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.628 ms 00:06:52.983 00:06:52.983 --- 10.0.0.2 ping statistics --- 00:06:52.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.983 rtt min/avg/max/mdev = 0.628/0.628/0.628/0.000 ms 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:52.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:52.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:06:52.983 00:06:52.983 --- 10.0.0.1 ping statistics --- 00:06:52.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:52.983 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2796578 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2796578 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:52.983 04:18:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2796578 ']' 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.983 04:18:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:52.983 [2024-11-05 04:18:05.944768] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:06:52.983 [2024-11-05 04:18:05.944838] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:52.983 [2024-11-05 04:18:06.042650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.983 [2024-11-05 04:18:06.095693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:52.983 [2024-11-05 04:18:06.095757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:52.983 [2024-11-05 04:18:06.095767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:52.983 [2024-11-05 04:18:06.095774] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:52.983 [2024-11-05 04:18:06.095781] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
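Everything from nvmf_tcp_init down to the nvmf_tgt launch above is a compact recipe for a two-port loopback rig: the first physical port (cvl_0_0) is moved into a private network namespace to act as the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, one iptables rule opens TCP port 4420, and the two pings prove the path before any NVMe/TCP traffic flows. A hedged recap of just those commands, lifted out of the harness (interface names, addresses, and flags are the ones in the trace):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
    # -m 0x1E is binary 11110: reactors on cores 1-4, which is exactly the
    # four "Reactor started on core N" notices that follow
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &

waitforlisten then blocks until the app answers on /var/tmp/spdk.sock; the UNIX-domain RPC socket lives on the shared filesystem, so rpc_cmd can drive the namespaced target from the root namespace.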
00:06:52.983 [2024-11-05 04:18:06.097795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.983 [2024-11-05 04:18:06.098025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.983 [2024-11-05 04:18:06.098197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.983 [2024-11-05 04:18:06.098197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:53.244 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:53.244 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:53.244 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:53.244 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.244 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:53.244 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:53.244 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:53.244 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.244 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:53.244 [2024-11-05 04:18:06.801966] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.244 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.244 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:53.244 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:53.244 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:53.245 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:53.245 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:53.245 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:53.245 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:53.245 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:53.245 Malloc0 00:06:53.245 [2024-11-05 04:18:06.872971] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=2796941 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2796941 /var/tmp/bdevperf.sock 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2796941 ']' 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:53.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:53.505 { 00:06:53.505 "params": { 00:06:53.505 "name": "Nvme$subsystem", 00:06:53.505 "trtype": "$TEST_TRANSPORT", 00:06:53.505 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:53.505 "adrfam": "ipv4", 00:06:53.505 "trsvcid": "$NVMF_PORT", 00:06:53.505 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:53.505 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:53.505 "hdgst": ${hdgst:-false}, 00:06:53.505 "ddgst": ${ddgst:-false} 00:06:53.505 }, 00:06:53.505 "method": "bdev_nvme_attach_controller" 00:06:53.505 } 00:06:53.505 EOF 00:06:53.505 )") 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:53.505 04:18:06 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:53.505 "params": { 00:06:53.505 "name": "Nvme0", 00:06:53.505 "trtype": "tcp", 00:06:53.505 "traddr": "10.0.0.2", 00:06:53.505 "adrfam": "ipv4", 00:06:53.505 "trsvcid": "4420", 00:06:53.505 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:53.505 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:53.505 "hdgst": false, 00:06:53.505 "ddgst": false 00:06:53.505 }, 00:06:53.505 "method": "bdev_nvme_attach_controller" 00:06:53.505 }' 00:06:53.505 [2024-11-05 04:18:06.975209] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
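The config=()/heredoc/jq sequence above is how gen_nvmf_target_json hands bdevperf its attach configuration without a temporary file: a per-subsystem JSON stanza is filled from shell variables, jq normalizes it, and the result reaches bdevperf as the anonymous file that shows up on the command line as --json /dev/fd/63. A simplified, single-subsystem sketch using the values printed above; note the outer "subsystems"/"config" envelope is the standard SPDK JSON-config shape and is an assumption here, since the trace only prints the inner stanza:

    gen_nvmf_target_json() {
        # stanza values mirror the filled-in config printed by the trace;
        # the envelope around it is assumed, not shown in the log
        printf '%s\n' '{
          "subsystems": [
            {
              "subsystem": "bdev",
              "config": [
                {
                  "params": {
                    "name": "Nvme0",
                    "trtype": "tcp",
                    "traddr": "10.0.0.2",
                    "adrfam": "ipv4",
                    "trsvcid": "4420",
                    "subnqn": "nqn.2016-06.io.spdk:cnode0",
                    "hostnqn": "nqn.2016-06.io.spdk:host0",
                    "hdgst": false,
                    "ddgst": false
                  },
                  "method": "bdev_nvme_attach_controller"
                }
              ]
            }
          ]
        }' | jq .
    }
    # process substitution is what appears above as --json /dev/fd/63
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json) -q 64 -o 65536 -w verify -t 10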
00:06:53.505 [2024-11-05 04:18:06.975261] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2796941 ]
[2024-11-05 04:18:07.046134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-05 04:18:07.082313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.766 Running I/O for 10 seconds...
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']'
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
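waitforio above is a plain readiness gate: it polls bdev_get_iostat on bdevperf's RPC socket, up to ten times, until the Nvme0n1 bdev has completed at least 100 reads. Here the first poll already saw 899, so ret flips to 0 and the loop breaks. Recreated outside the harness (the jq filter is the one in the trace; the sleep pacing is an assumption, since only a single iteration is visible):

    i=10 ret=1
    while (( i != 0 )); do
        read_io_count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25    # assumed pacing; not shown in the trace
        (( i-- ))
    done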
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:54.339 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:54.339 [2024-11-05 04:18:07.860021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x155a130 is same with the state(6) to be set
00:06:54.340 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:54.340 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:06:54.340 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:54.340 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:54.340 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:54.340 04:18:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
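The nvme_qpair.c flood that follows is the initiator's side of the two rpc_cmd calls above: revoking host0's authorization makes the target drop the queue pair, so bdevperf's outstanding admin ASYNC EVENT REQUESTs and in-flight READ/WRITE I/O all complete as ABORTED - SQ DELETION, and the session only comes back once the host NQN is re-added. The whole exchange, reduced to plain RPC calls (socket path and NQNs are the ones in the log; rpc_cmd is the harness wrapper around rpc.py):

    rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    # revoke the host: the target disconnects it and aborts its queued I/O
    rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # restore it so bdevperf can reconnect and finish the 10-second run
    rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1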
[2024-11-05 04:18:07.878434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-05 04:18:07.878471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-05 04:18:07.878482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-05 04:18:07.878490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-05 04:18:07.878498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-05 04:18:07.878505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-05 04:18:07.878519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-05 04:18:07.878527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-05 04:18:07.878535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8a8000 is same with the state(6) to be set
[2024-11-05 04:18:07.878606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-05 04:18:07.878616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-05 04:18:07.878631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-05 04:18:07.878640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-05 04:18:07.878649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-05 04:18:07.878657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-05 04:18:07.878667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-05 04:18:07.878675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-05 04:18:07.879326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-05 04:18:07.879333] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879500] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.341 [2024-11-05 04:18:07.879649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.341 [2024-11-05 04:18:07.879658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.342 [2024-11-05 04:18:07.879665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.342 [2024-11-05 04:18:07.879674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.342 [2024-11-05 04:18:07.879683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.342 [2024-11-05 04:18:07.879693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.342 [2024-11-05 04:18:07.879700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.342 [2024-11-05 04:18:07.879710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:54.342 [2024-11-05 04:18:07.879718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:54.342 [2024-11-05 04:18:07.880965] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:54.342 task offset: 130688 on job bdev=Nvme0n1 fails 00:06:54.342 00:06:54.342 Latency(us) 00:06:54.342 [2024-11-05T03:18:07.982Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:54.342 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:54.342 Job: Nvme0n1 ended in about 0.65 seconds with error 00:06:54.342 Verification LBA range: start 0x0 length 0x400 00:06:54.342 Nvme0n1 : 0.65 1570.24 98.14 98.43 0.00 37494.01 1665.71 33204.91 00:06:54.342 [2024-11-05T03:18:07.982Z] =================================================================================================================== 00:06:54.342 [2024-11-05T03:18:07.982Z] Total : 1570.24 98.14 98.43 0.00 37494.01 1665.71 33204.91 00:06:54.342 [2024-11-05 04:18:07.882946] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:54.342 [2024-11-05 04:18:07.882968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8a8000 (9): Bad file descriptor 00:06:54.342 [2024-11-05 04:18:07.893270] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
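Note: the aborted-completion burst above is SPDK draining the submission queue during the controller reset; each still-queued command is printed together with its completion. The status "(00/08)" is SCT 0x0 / SC 0x08, i.e. Command Aborted due to SQ Deletion in the generic command status set; sqhd is the reported SQ head pointer, p the phase tag, m the "more" bit, and dnr the do-not-retry bit. The failing job's table is internally consistent: 1570.24 IOPS x 64 KiB per IO = 98.14 MiB/s, matching the MiB/s column. A minimal sketch for summarizing such a burst offline, assuming the console output was saved to a file (the name spdk_console.log is illustrative, not from this run):

log=spdk_console.log
# count the aborted completions
grep -c 'ABORTED - SQ DELETION' "$log"
# list the commands that were in flight, grouped by opcode
grep -oE '(READ|WRITE) sqid:[0-9]+ cid:[0-9]+ nsid:[0-9]+ lba:[0-9]+ len:[0-9]+' "$log" \
  | awk '{print $1}' | sort | uniq -c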
00:06:55.285 04:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2796941 00:06:55.285 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2796941) - No such process 00:06:55.285 04:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:55.285 04:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:55.285 04:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:55.285 04:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:55.285 04:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:55.285 04:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:55.286 04:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:55.286 04:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:55.286 { 00:06:55.286 "params": { 00:06:55.286 "name": "Nvme$subsystem", 00:06:55.286 "trtype": "$TEST_TRANSPORT", 00:06:55.286 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:55.286 "adrfam": "ipv4", 00:06:55.286 "trsvcid": "$NVMF_PORT", 00:06:55.286 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:55.286 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:55.286 "hdgst": ${hdgst:-false}, 00:06:55.286 "ddgst": ${ddgst:-false} 00:06:55.286 }, 00:06:55.286 "method": "bdev_nvme_attach_controller" 00:06:55.286 } 00:06:55.286 EOF 00:06:55.286 )") 00:06:55.286 04:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:55.286 04:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:55.286 04:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:55.286 04:18:08 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:55.286 "params": { 00:06:55.286 "name": "Nvme0", 00:06:55.286 "trtype": "tcp", 00:06:55.286 "traddr": "10.0.0.2", 00:06:55.286 "adrfam": "ipv4", 00:06:55.286 "trsvcid": "4420", 00:06:55.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:55.286 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:55.286 "hdgst": false, 00:06:55.286 "ddgst": false 00:06:55.286 }, 00:06:55.286 "method": "bdev_nvme_attach_controller" 00:06:55.286 }' 00:06:55.547 [2024-11-05 04:18:08.946783] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:06:55.547 [2024-11-05 04:18:08.946838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2797300 ] 00:06:55.547 [2024-11-05 04:18:09.017835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.547 [2024-11-05 04:18:09.056519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.809 Running I/O for 1 seconds... 
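Note: the bdevperf run above receives its configuration through process substitution (--json /dev/fd/62); the fragment printed by gen_nvmf_target_json is a bdev_nvme_attach_controller entry of an SPDK JSON config. A sketch of the equivalent standalone file, with the params taken verbatim from the printf output above; the subsystems/bdev wrapper framing is recalled from SPDK's JSON-config format rather than shown in this log, and the file name bdevperf.json is illustrative:

cat > bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
# same workload flags as the run above
build/examples/bdevperf --json bdevperf.json -q 64 -o 65536 -w verify -t 1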
00:06:56.750 1536.00 IOPS, 96.00 MiB/s 00:06:56.750 Latency(us) 00:06:56.750 [2024-11-05T03:18:10.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:56.750 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:56.750 Verification LBA range: start 0x0 length 0x400 00:06:56.750 Nvme0n1 : 1.04 1540.97 96.31 0.00 0.00 40824.68 8792.75 33204.91 00:06:56.750 [2024-11-05T03:18:10.390Z] =================================================================================================================== 00:06:56.750 [2024-11-05T03:18:10.390Z] Total : 1540.97 96.31 0.00 0.00 40824.68 8792.75 33204.91 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:57.010 rmmod nvme_tcp 00:06:57.010 rmmod nvme_fabrics 00:06:57.010 rmmod nvme_keyring 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2796578 ']' 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2796578 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2796578 ']' 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2796578 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2796578 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:57.010 04:18:10 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2796578' 00:06:57.010 killing process with pid 2796578 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2796578 00:06:57.010 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2796578 00:06:57.271 [2024-11-05 04:18:10.716919] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:57.271 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:57.271 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:57.271 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:57.271 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:57.271 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:57.271 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:57.271 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:57.271 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:57.271 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:57.271 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:57.271 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:57.271 04:18:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.184 04:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:59.445 04:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:59.445 00:06:59.445 real 0m14.633s 00:06:59.445 user 0m23.304s 00:06:59.445 sys 0m6.699s 00:06:59.445 04:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.445 04:18:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:59.445 ************************************ 00:06:59.445 END TEST nvmf_host_management 00:06:59.445 ************************************ 00:06:59.445 04:18:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:59.445 04:18:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:59.445 04:18:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.445 04:18:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:59.445 ************************************ 00:06:59.445 START TEST nvmf_lvol 00:06:59.445 ************************************ 00:06:59.445 04:18:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:59.445 * Looking for test storage... 00:06:59.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:59.445 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:59.445 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:59.445 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:59.445 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:59.445 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:59.445 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:59.445 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:59.445 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:59.445 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:59.445 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:59.445 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:59.445 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:59.445 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:59.445 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:59.445 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:59.706 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:59.706 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:59.706 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:59.706 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:59.706 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:59.706 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:59.706 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:59.706 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:59.706 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:59.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.707 --rc genhtml_branch_coverage=1 00:06:59.707 --rc genhtml_function_coverage=1 00:06:59.707 --rc genhtml_legend=1 00:06:59.707 --rc geninfo_all_blocks=1 00:06:59.707 --rc geninfo_unexecuted_blocks=1 00:06:59.707 00:06:59.707 ' 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:59.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.707 --rc genhtml_branch_coverage=1 00:06:59.707 --rc genhtml_function_coverage=1 00:06:59.707 --rc genhtml_legend=1 00:06:59.707 --rc geninfo_all_blocks=1 00:06:59.707 --rc geninfo_unexecuted_blocks=1 00:06:59.707 00:06:59.707 ' 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:59.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.707 --rc genhtml_branch_coverage=1 00:06:59.707 --rc genhtml_function_coverage=1 00:06:59.707 --rc genhtml_legend=1 00:06:59.707 --rc geninfo_all_blocks=1 00:06:59.707 --rc geninfo_unexecuted_blocks=1 00:06:59.707 00:06:59.707 ' 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:59.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:59.707 --rc genhtml_branch_coverage=1 00:06:59.707 --rc genhtml_function_coverage=1 00:06:59.707 --rc genhtml_legend=1 00:06:59.707 --rc geninfo_all_blocks=1 00:06:59.707 --rc geninfo_unexecuted_blocks=1 00:06:59.707 00:06:59.707 ' 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=[long toolchain PATH elided: /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin repeated several times, then the usual system dirs through /var/lib/snapd/snap/bin]
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=[same long PATH elided, now led by /opt/go/1.21.1/bin]
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=[same long PATH elided, now led by /opt/protoc/21.7/bin]
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo [the exported PATH value, elided as above]
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:06:59.707 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- #
LVOL_BDEV_INIT_SIZE=20 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:06:59.707 04:18:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:07.856 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:07.856 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:07.856 04:18:20 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:07.856 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:07.856 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:07.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:07.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:07:07.856 00:07:07.856 --- 10.0.0.2 ping statistics --- 00:07:07.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.856 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:07:07.856 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.856 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:07.856 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:07:07.856 00:07:07.856 --- 10.0.0.1 ping statistics --- 00:07:07.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.857 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2802433 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2802433 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2802433 ']' 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:07.857 [2024-11-05 04:18:20.501327] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:07:07.857 [2024-11-05 04:18:20.501371] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.857 [2024-11-05 04:18:20.570191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.857 [2024-11-05 04:18:20.605794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.857 [2024-11-05 04:18:20.605825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.857 [2024-11-05 04:18:20.605833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.857 [2024-11-05 04:18:20.605840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.857 [2024-11-05 04:18:20.605846] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:07.857 [2024-11-05 04:18:20.607148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.857 [2024-11-05 04:18:20.607263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.857 [2024-11-05 04:18:20.607265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:07.857 [2024-11-05 04:18:20.886950] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.857 04:18:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:07.857 04:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:07.857 04:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:07.857 04:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:07.857 04:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:08.118 04:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:08.118 04:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6c6673ae-4689-4497-bbc7-0fc9cbe974d5 00:07:08.118 04:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6c6673ae-4689-4497-bbc7-0fc9cbe974d5 lvol 20 00:07:08.380 04:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=16caaf1d-eb91-475b-a9b3-814c123c4c18 00:07:08.380 04:18:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:08.640 04:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 16caaf1d-eb91-475b-a9b3-814c123c4c18 00:07:08.641 04:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:08.902 [2024-11-05 04:18:22.402684] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.902 04:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:09.163 04:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2802813 00:07:09.163 04:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:09.163 04:18:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:10.106 04:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 16caaf1d-eb91-475b-a9b3-814c123c4c18 MY_SNAPSHOT 00:07:10.368 04:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=00bd14f8-aea3-4657-8d91-929676e15253 00:07:10.368 04:18:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 16caaf1d-eb91-475b-a9b3-814c123c4c18 30 00:07:10.627 04:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 00bd14f8-aea3-4657-8d91-929676e15253 MY_CLONE 00:07:10.887 04:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=795be4cb-4c94-455a-ad54-672a560835a5 00:07:10.887 04:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 795be4cb-4c94-455a-ad54-672a560835a5 00:07:11.148 04:18:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2802813 00:07:21.149 Initializing NVMe Controllers 00:07:21.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:21.149 Controller IO queue size 128, less than required. 00:07:21.149 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
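The RPC sequence nvmf_lvol.sh traces above (steps @21 through @50) condenses to the following sketch; $rpc is an assumed shorthand for the scripts/rpc.py path the log spells out, and the UUID captures mirror what the script stores in $lvs, $lvol, $snapshot and $clone.

    rpc="$SPDK/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                       # -> Malloc0
    $rpc bdev_malloc_create 64 512                       # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)       # lvstore on the raid
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)      # 20 MiB lvol
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # drive randwrite I/O for 10 s while the lvol is mutated underneath:
    "$SPDK/build/bin/spdk_nvme_perf" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    snapshot=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30
    clone=$($rpc bdev_lvol_clone "$snapshot" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"
    wait                                                 # let perf finish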
00:07:21.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:21.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:21.149 Initialization complete. Launching workers. 00:07:21.149 ======================================================== 00:07:21.149 Latency(us) 00:07:21.149 Device Information : IOPS MiB/s Average min max 00:07:21.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12139.30 47.42 10548.78 1530.27 59402.40 00:07:21.150 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16646.00 65.02 7689.31 1375.13 57073.41 00:07:21.150 ======================================================== 00:07:21.150 Total : 28785.30 112.44 8895.20 1375.13 59402.40 00:07:21.150 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 16caaf1d-eb91-475b-a9b3-814c123c4c18 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c6673ae-4689-4497-bbc7-0fc9cbe974d5 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:21.150 rmmod nvme_tcp 00:07:21.150 rmmod nvme_fabrics 00:07:21.150 rmmod nvme_keyring 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2802433 ']' 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2802433 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2802433 ']' 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2802433 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2802433 00:07:21.150 04:18:33 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2802433' 00:07:21.150 killing process with pid 2802433 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2802433 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2802433 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.150 04:18:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.536 04:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:22.536 00:07:22.536 real 0m23.020s 00:07:22.536 user 1m2.855s 00:07:22.536 sys 0m8.042s 00:07:22.536 04:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:22.536 04:18:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:22.536 ************************************ 00:07:22.536 END TEST nvmf_lvol 00:07:22.536 ************************************ 00:07:22.536 04:18:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:22.536 04:18:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:22.536 04:18:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:22.536 04:18:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:22.536 ************************************ 00:07:22.536 START TEST nvmf_lvs_grow 00:07:22.536 ************************************ 00:07:22.536 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:22.536 * Looking for test storage... 
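For reference, the nvmf_lvol teardown traced just above (steps @56 onward plus nvmftestfini) amounts to this sketch; the namespace deletion is an assumption about what _remove_spdk_ns does, while the remaining commands appear verbatim in the trace.

    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    $rpc bdev_lvol_delete "$lvol"
    $rpc bdev_lvol_delete_lvstore -u "$lvs"
    sync
    modprobe -v -r nvme-tcp          # also drops nvme_fabrics / nvme_keyring
    kill "$nvmfpid"
    # strip the SPDK-tagged firewall rule, then tear down the namespace
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    ip netns delete cvl_0_0_ns_spdk  # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1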
00:07:22.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:22.536 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:22.536 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:22.536 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:22.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.799 --rc genhtml_branch_coverage=1 00:07:22.799 --rc genhtml_function_coverage=1 00:07:22.799 --rc genhtml_legend=1 00:07:22.799 --rc geninfo_all_blocks=1 00:07:22.799 --rc geninfo_unexecuted_blocks=1 00:07:22.799 00:07:22.799 ' 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:22.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.799 --rc genhtml_branch_coverage=1 00:07:22.799 --rc genhtml_function_coverage=1 00:07:22.799 --rc genhtml_legend=1 00:07:22.799 --rc geninfo_all_blocks=1 00:07:22.799 --rc geninfo_unexecuted_blocks=1 00:07:22.799 00:07:22.799 ' 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:22.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.799 --rc genhtml_branch_coverage=1 00:07:22.799 --rc genhtml_function_coverage=1 00:07:22.799 --rc genhtml_legend=1 00:07:22.799 --rc geninfo_all_blocks=1 00:07:22.799 --rc geninfo_unexecuted_blocks=1 00:07:22.799 00:07:22.799 ' 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:22.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:22.799 --rc genhtml_branch_coverage=1 00:07:22.799 --rc genhtml_function_coverage=1 00:07:22.799 --rc genhtml_legend=1 00:07:22.799 --rc geninfo_all_blocks=1 00:07:22.799 --rc geninfo_unexecuted_blocks=1 00:07:22.799 00:07:22.799 ' 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:22.799 04:18:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:22.799 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:22.799 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:22.800 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:22.800 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:22.800 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:22.800 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:22.800 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:22.800 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:22.800 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:22.800 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:22.800 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:22.800 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:22.800 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:22.800 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:22.800 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:22.800 04:18:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:07:30.942 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:07:30.942 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.942 04:18:43 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:07:30.942 Found net devices under 0000:4b:00.0: cvl_0_0 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:07:30.942 Found net devices under 0000:4b:00.1: cvl_0_1 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:30.942 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:30.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:30.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:07:30.943 00:07:30.943 --- 10.0.0.2 ping statistics --- 00:07:30.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.943 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:30.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:30.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:07:30.943 00:07:30.943 --- 10.0.0.1 ping statistics --- 00:07:30.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.943 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2809186 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2809186 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2809186 ']' 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.943 [2024-11-05 04:18:43.653934] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
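The pings above confirm the nvmf_tcp_init plumbing set up earlier in this trace: the target-side port (cvl_0_0, 10.0.0.2) lives in its own network namespace while the initiator port (cvl_0_1, 10.0.0.1) stays in the root namespace. As a sketch of the commands the trace shows:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP (port 4420), tagged so teardown can strip the rule again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'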
00:07:30.943 [2024-11-05 04:18:43.653984] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.943 [2024-11-05 04:18:43.730726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.943 [2024-11-05 04:18:43.765543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.943 [2024-11-05 04:18:43.765577] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.943 [2024-11-05 04:18:43.765585] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.943 [2024-11-05 04:18:43.765592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.943 [2024-11-05 04:18:43.765598] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:30.943 [2024-11-05 04:18:43.766157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.943 04:18:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:30.943 [2024-11-05 04:18:44.041171] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:30.943 ************************************ 00:07:30.943 START TEST lvs_grow_clean 00:07:30.943 ************************************ 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:30.943 04:18:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8cd85984-c7d3-49e0-be03-61811cc71a83 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cd85984-c7d3-49e0-be03-61811cc71a83 00:07:30.943 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:31.204 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:31.204 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:31.204 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8cd85984-c7d3-49e0-be03-61811cc71a83 lvol 150 00:07:31.204 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=731afe37-38f0-42b1-995c-95d52630e4db 00:07:31.204 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:31.205 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:31.465 [2024-11-05 04:18:44.976927] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:31.465 [2024-11-05 04:18:44.976978] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:31.465 true 00:07:31.465 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
8cd85984-c7d3-49e0-be03-61811cc71a83 00:07:31.465 04:18:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:31.726 04:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:31.726 04:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:31.726 04:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 731afe37-38f0-42b1-995c-95d52630e4db 00:07:31.986 04:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:31.986 [2024-11-05 04:18:45.618915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.246 04:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.246 04:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2809762 00:07:32.246 04:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:32.246 04:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:32.246 04:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2809762 /var/tmp/bdevperf.sock 00:07:32.246 04:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2809762 ']' 00:07:32.246 04:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:32.246 04:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:32.246 04:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:32.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:32.246 04:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:32.246 04:18:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:32.246 [2024-11-05 04:18:45.847638] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
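The lvs_grow_clean setup traced above condenses to the sketch below: an lvstore backed by a growable AIO file, its lvol exported over NVMe/TCP, and a second SPDK app (bdevperf, started with -z so it idles until driven over its own RPC socket) attached as the initiator. The $aio/$rpc shorthands are assumptions; every command appears in the trace.

    aio="$SPDK/test/nvmf/target/aio_bdev"
    truncate -s 200M "$aio"
    $rpc bdev_aio_create "$aio" aio_bdev 4096
    lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
          --md-pages-per-cluster-ratio 300 aio_bdev lvs)   # 49 data clusters
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)       # 150 MiB lvol
    truncate -s 400M "$aio"                                # grow the backing file
    $rpc bdev_aio_rescan aio_bdev          # 51200 -> 102400 blocks; lvstore still 49 clusters
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    "$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # mid-run, bdev_lvol_grow_lvstore -u "$lvs" claims the new space (49 -> 99 clusters)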
00:07:32.246 [2024-11-05 04:18:45.847691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2809762 ] 00:07:32.507 [2024-11-05 04:18:45.936850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.507 [2024-11-05 04:18:45.972742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.090 04:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:33.090 04:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:33.090 04:18:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:33.428 Nvme0n1 00:07:33.428 04:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:33.750 [ 00:07:33.750 { 00:07:33.750 "name": "Nvme0n1", 00:07:33.750 "aliases": [ 00:07:33.750 "731afe37-38f0-42b1-995c-95d52630e4db" 00:07:33.750 ], 00:07:33.751 "product_name": "NVMe disk", 00:07:33.751 "block_size": 4096, 00:07:33.751 "num_blocks": 38912, 00:07:33.751 "uuid": "731afe37-38f0-42b1-995c-95d52630e4db", 00:07:33.751 "numa_id": 0, 00:07:33.751 "assigned_rate_limits": { 00:07:33.751 "rw_ios_per_sec": 0, 00:07:33.751 "rw_mbytes_per_sec": 0, 00:07:33.751 "r_mbytes_per_sec": 0, 00:07:33.751 "w_mbytes_per_sec": 0 00:07:33.751 }, 00:07:33.751 "claimed": false, 00:07:33.751 "zoned": false, 00:07:33.751 "supported_io_types": { 00:07:33.751 "read": true, 00:07:33.751 "write": true, 00:07:33.751 "unmap": true, 00:07:33.751 "flush": true, 00:07:33.751 "reset": true, 00:07:33.751 "nvme_admin": true, 00:07:33.751 "nvme_io": true, 00:07:33.751 "nvme_io_md": false, 00:07:33.751 "write_zeroes": true, 00:07:33.751 "zcopy": false, 00:07:33.751 "get_zone_info": false, 00:07:33.751 "zone_management": false, 00:07:33.751 "zone_append": false, 00:07:33.751 "compare": true, 00:07:33.751 "compare_and_write": true, 00:07:33.751 "abort": true, 00:07:33.751 "seek_hole": false, 00:07:33.751 "seek_data": false, 00:07:33.751 "copy": true, 00:07:33.751 "nvme_iov_md": false 00:07:33.751 }, 00:07:33.751 "memory_domains": [ 00:07:33.751 { 00:07:33.751 "dma_device_id": "system", 00:07:33.751 "dma_device_type": 1 00:07:33.751 } 00:07:33.751 ], 00:07:33.751 "driver_specific": { 00:07:33.751 "nvme": [ 00:07:33.751 { 00:07:33.751 "trid": { 00:07:33.751 "trtype": "TCP", 00:07:33.751 "adrfam": "IPv4", 00:07:33.751 "traddr": "10.0.0.2", 00:07:33.751 "trsvcid": "4420", 00:07:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:33.751 }, 00:07:33.751 "ctrlr_data": { 00:07:33.751 "cntlid": 1, 00:07:33.751 "vendor_id": "0x8086", 00:07:33.751 "model_number": "SPDK bdev Controller", 00:07:33.751 "serial_number": "SPDK0", 00:07:33.751 "firmware_revision": "25.01", 00:07:33.751 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:33.751 "oacs": { 00:07:33.751 "security": 0, 00:07:33.751 "format": 0, 00:07:33.751 "firmware": 0, 00:07:33.751 "ns_manage": 0 00:07:33.751 }, 00:07:33.751 "multi_ctrlr": true, 00:07:33.751 
"ana_reporting": false 00:07:33.751 }, 00:07:33.751 "vs": { 00:07:33.751 "nvme_version": "1.3" 00:07:33.751 }, 00:07:33.751 "ns_data": { 00:07:33.751 "id": 1, 00:07:33.751 "can_share": true 00:07:33.751 } 00:07:33.751 } 00:07:33.751 ], 00:07:33.751 "mp_policy": "active_passive" 00:07:33.751 } 00:07:33.751 } 00:07:33.751 ] 00:07:33.751 04:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2809940 00:07:33.751 04:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:33.751 04:18:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:33.751 Running I/O for 10 seconds... 00:07:34.716 Latency(us) 00:07:34.716 [2024-11-05T03:18:48.356Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.716 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.716 Nvme0n1 : 1.00 17655.00 68.96 0.00 0.00 0.00 0.00 0.00 00:07:34.716 [2024-11-05T03:18:48.356Z] =================================================================================================================== 00:07:34.716 [2024-11-05T03:18:48.356Z] Total : 17655.00 68.96 0.00 0.00 0.00 0.00 0.00 00:07:34.716 00:07:35.669 04:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8cd85984-c7d3-49e0-be03-61811cc71a83 00:07:35.669 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.669 Nvme0n1 : 2.00 17814.50 69.59 0.00 0.00 0.00 0.00 0.00 00:07:35.669 [2024-11-05T03:18:49.309Z] =================================================================================================================== 00:07:35.669 [2024-11-05T03:18:49.309Z] Total : 17814.50 69.59 0.00 0.00 0.00 0.00 0.00 00:07:35.669 00:07:35.930 true 00:07:35.930 04:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:35.930 04:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cd85984-c7d3-49e0-be03-61811cc71a83 00:07:35.930 04:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:35.930 04:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:35.930 04:18:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2809940 00:07:36.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.870 Nvme0n1 : 3.00 17864.00 69.78 0.00 0.00 0.00 0.00 0.00 00:07:36.870 [2024-11-05T03:18:50.510Z] =================================================================================================================== 00:07:36.870 [2024-11-05T03:18:50.510Z] Total : 17864.00 69.78 0.00 0.00 0.00 0.00 0.00 00:07:36.870 00:07:37.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.811 Nvme0n1 : 4.00 17881.50 69.85 0.00 0.00 0.00 0.00 0.00 00:07:37.811 [2024-11-05T03:18:51.451Z] 
=================================================================================================================== 00:07:37.811 [2024-11-05T03:18:51.451Z] Total : 17881.50 69.85 0.00 0.00 0.00 0.00 0.00 00:07:37.811 00:07:38.753 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.753 Nvme0n1 : 5.00 17917.60 69.99 0.00 0.00 0.00 0.00 0.00 00:07:38.753 [2024-11-05T03:18:52.393Z] =================================================================================================================== 00:07:38.753 [2024-11-05T03:18:52.393Z] Total : 17917.60 69.99 0.00 0.00 0.00 0.00 0.00 00:07:38.753 00:07:39.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.693 Nvme0n1 : 6.00 17945.83 70.10 0.00 0.00 0.00 0.00 0.00 00:07:39.693 [2024-11-05T03:18:53.333Z] =================================================================================================================== 00:07:39.693 [2024-11-05T03:18:53.333Z] Total : 17945.83 70.10 0.00 0.00 0.00 0.00 0.00 00:07:39.693 00:07:41.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.079 Nvme0n1 : 7.00 17967.86 70.19 0.00 0.00 0.00 0.00 0.00 00:07:41.079 [2024-11-05T03:18:54.719Z] =================================================================================================================== 00:07:41.079 [2024-11-05T03:18:54.719Z] Total : 17967.86 70.19 0.00 0.00 0.00 0.00 0.00 00:07:41.079 00:07:41.650 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.650 Nvme0n1 : 8.00 17994.62 70.29 0.00 0.00 0.00 0.00 0.00 00:07:41.650 [2024-11-05T03:18:55.290Z] =================================================================================================================== 00:07:41.650 [2024-11-05T03:18:55.290Z] Total : 17994.62 70.29 0.00 0.00 0.00 0.00 0.00 00:07:41.650 00:07:43.035 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.035 Nvme0n1 : 9.00 18007.00 70.34 0.00 0.00 0.00 0.00 0.00 00:07:43.035 [2024-11-05T03:18:56.675Z] =================================================================================================================== 00:07:43.035 [2024-11-05T03:18:56.675Z] Total : 18007.00 70.34 0.00 0.00 0.00 0.00 0.00 00:07:43.035 00:07:43.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.977 Nvme0n1 : 10.00 18020.90 70.39 0.00 0.00 0.00 0.00 0.00 00:07:43.977 [2024-11-05T03:18:57.617Z] =================================================================================================================== 00:07:43.977 [2024-11-05T03:18:57.617Z] Total : 18020.90 70.39 0.00 0.00 0.00 0.00 0.00 00:07:43.977 00:07:43.977 00:07:43.977 Latency(us) 00:07:43.977 [2024-11-05T03:18:57.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.977 Nvme0n1 : 10.01 18022.80 70.40 0.00 0.00 7099.92 2976.43 14199.47 00:07:43.977 [2024-11-05T03:18:57.617Z] =================================================================================================================== 00:07:43.977 [2024-11-05T03:18:57.617Z] Total : 18022.80 70.40 0.00 0.00 7099.92 2976.43 14199.47 00:07:43.977 { 00:07:43.977 "results": [ 00:07:43.977 { 00:07:43.977 "job": "Nvme0n1", 00:07:43.977 "core_mask": "0x2", 00:07:43.977 "workload": "randwrite", 00:07:43.977 "status": "finished", 00:07:43.977 "queue_depth": 128, 00:07:43.977 "io_size": 4096, 00:07:43.977 
"runtime": 10.006046, 00:07:43.977 "iops": 18022.803413056467, 00:07:43.977 "mibps": 70.40157583225182, 00:07:43.977 "io_failed": 0, 00:07:43.977 "io_timeout": 0, 00:07:43.977 "avg_latency_us": 7099.920640134857, 00:07:43.977 "min_latency_us": 2976.4266666666667, 00:07:43.977 "max_latency_us": 14199.466666666667 00:07:43.977 } 00:07:43.977 ], 00:07:43.977 "core_count": 1 00:07:43.977 } 00:07:43.977 04:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2809762 00:07:43.977 04:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2809762 ']' 00:07:43.977 04:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2809762 00:07:43.977 04:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:43.977 04:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:43.977 04:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2809762 00:07:43.977 04:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:43.977 04:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:43.977 04:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2809762' 00:07:43.977 killing process with pid 2809762 00:07:43.977 04:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2809762 00:07:43.977 Received shutdown signal, test time was about 10.000000 seconds 00:07:43.977 00:07:43.977 Latency(us) 00:07:43.977 [2024-11-05T03:18:57.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.977 [2024-11-05T03:18:57.617Z] =================================================================================================================== 00:07:43.977 [2024-11-05T03:18:57.618Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:43.978 04:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2809762 00:07:43.978 04:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:44.239 04:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:44.239 04:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cd85984-c7d3-49e0-be03-61811cc71a83 00:07:44.239 04:18:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:44.499 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:44.499 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:44.499 04:18:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:44.760 [2024-11-05 04:18:58.157965] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:44.760 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cd85984-c7d3-49e0-be03-61811cc71a83 00:07:44.760 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:44.760 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cd85984-c7d3-49e0-be03-61811cc71a83 00:07:44.760 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.760 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.760 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.760 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.760 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.760 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:44.760 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.760 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:44.760 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cd85984-c7d3-49e0-be03-61811cc71a83 00:07:44.760 request: 00:07:44.760 { 00:07:44.760 "uuid": "8cd85984-c7d3-49e0-be03-61811cc71a83", 00:07:44.760 "method": "bdev_lvol_get_lvstores", 00:07:44.760 "req_id": 1 00:07:44.760 } 00:07:44.760 Got JSON-RPC error response 00:07:44.761 response: 00:07:44.761 { 00:07:44.761 "code": -19, 00:07:44.761 "message": "No such device" 00:07:44.761 } 00:07:44.761 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:44.761 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:44.761 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:44.761 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:44.761 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.022 aio_bdev 00:07:45.022 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 731afe37-38f0-42b1-995c-95d52630e4db 00:07:45.022 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=731afe37-38f0-42b1-995c-95d52630e4db 00:07:45.022 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:45.022 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:45.022 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:45.022 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:45.022 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:45.282 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 731afe37-38f0-42b1-995c-95d52630e4db -t 2000 00:07:45.282 [ 00:07:45.282 { 00:07:45.282 "name": "731afe37-38f0-42b1-995c-95d52630e4db", 00:07:45.282 "aliases": [ 00:07:45.282 "lvs/lvol" 00:07:45.282 ], 00:07:45.282 "product_name": "Logical Volume", 00:07:45.282 "block_size": 4096, 00:07:45.282 "num_blocks": 38912, 00:07:45.282 "uuid": "731afe37-38f0-42b1-995c-95d52630e4db", 00:07:45.282 "assigned_rate_limits": { 00:07:45.282 "rw_ios_per_sec": 0, 00:07:45.282 "rw_mbytes_per_sec": 0, 00:07:45.282 "r_mbytes_per_sec": 0, 00:07:45.282 "w_mbytes_per_sec": 0 00:07:45.282 }, 00:07:45.282 "claimed": false, 00:07:45.282 "zoned": false, 00:07:45.282 "supported_io_types": { 00:07:45.282 "read": true, 00:07:45.282 "write": true, 00:07:45.282 "unmap": true, 00:07:45.282 "flush": false, 00:07:45.282 "reset": true, 00:07:45.282 "nvme_admin": false, 00:07:45.282 "nvme_io": false, 00:07:45.282 "nvme_io_md": false, 00:07:45.282 "write_zeroes": true, 00:07:45.282 "zcopy": false, 00:07:45.282 "get_zone_info": false, 00:07:45.282 "zone_management": false, 00:07:45.282 "zone_append": false, 00:07:45.282 "compare": false, 00:07:45.282 "compare_and_write": false, 00:07:45.282 "abort": false, 00:07:45.282 "seek_hole": true, 00:07:45.282 "seek_data": true, 00:07:45.282 "copy": false, 00:07:45.282 "nvme_iov_md": false 00:07:45.282 }, 00:07:45.282 "driver_specific": { 00:07:45.282 "lvol": { 00:07:45.282 "lvol_store_uuid": "8cd85984-c7d3-49e0-be03-61811cc71a83", 00:07:45.282 "base_bdev": "aio_bdev", 00:07:45.282 "thin_provision": false, 00:07:45.282 "num_allocated_clusters": 38, 00:07:45.282 "snapshot": false, 00:07:45.282 "clone": false, 00:07:45.282 "esnap_clone": false 00:07:45.282 } 00:07:45.282 } 00:07:45.282 } 00:07:45.282 ] 00:07:45.282 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:45.282 04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cd85984-c7d3-49e0-be03-61811cc71a83 00:07:45.282 
04:18:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:45.543 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:45.543 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8cd85984-c7d3-49e0-be03-61811cc71a83 00:07:45.543 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:45.804 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:45.804 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 731afe37-38f0-42b1-995c-95d52630e4db 00:07:45.804 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8cd85984-c7d3-49e0-be03-61811cc71a83 00:07:46.066 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.326 00:07:46.326 real 0m15.635s 00:07:46.326 user 0m15.373s 00:07:46.326 sys 0m1.288s 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:46.326 ************************************ 00:07:46.326 END TEST lvs_grow_clean 00:07:46.326 ************************************ 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:46.326 ************************************ 00:07:46.326 START TEST lvs_grow_dirty 00:07:46.326 ************************************ 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:46.326 04:18:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:46.587 04:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:46.587 04:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:46.587 04:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c014c2ed-1393-4634-8a22-249575315636 00:07:46.587 04:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c014c2ed-1393-4634-8a22-249575315636 00:07:46.587 04:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:46.848 04:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:46.848 04:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:46.848 04:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c014c2ed-1393-4634-8a22-249575315636 lvol 150 00:07:47.108 04:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8b6692cc-3b6b-425d-a5e4-161f66449921 00:07:47.108 04:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.108 04:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:47.108 [2024-11-05 04:19:00.656262] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:47.108 [2024-11-05 04:19:00.656316] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:47.108 true 00:07:47.108 04:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c014c2ed-1393-4634-8a22-249575315636 00:07:47.108 04:19:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:47.369 04:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:47.369 04:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:47.369 04:19:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8b6692cc-3b6b-425d-a5e4-161f66449921 00:07:47.630 04:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:47.890 [2024-11-05 04:19:01.298238] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.890 04:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:47.890 04:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2812984 00:07:47.890 04:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:47.890 04:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:47.890 04:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2812984 /var/tmp/bdevperf.sock 00:07:47.890 04:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2812984 ']' 00:07:47.890 04:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:47.890 04:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:47.890 04:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:47.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:47.890 04:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:47.890 04:19:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:48.151 [2024-11-05 04:19:01.532928] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
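The trace above shows the standard SPDK pattern for driving bdevperf over its RPC socket: start bdevperf idle with -z, attach an NVMe-oF namespace as a bdev through rpc.py, then kick off the configured workload with bdevperf.py perform_tests. A minimal sketch of that flow, assuming a built SPDK tree under $SPDK and the nvmf target already listening on 10.0.0.2:4420 (both true in this job; every command below appears verbatim in the trace), would be:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # tree used throughout this job
  SOCK=/var/tmp/bdevperf.sock

  # 1. Start bdevperf idle (-z) on core 1 (mask 0x2): 4 KiB randwrite, QD 128, 10 s run.
  #    (The harness waits for $SOCK with waitforlisten before issuing any RPCs.)
  $SPDK/build/examples/bdevperf -r $SOCK -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

  # 2. Attach the target's namespace as bdev Nvme0n1 over TCP.
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

  # 3. Run the configured workload against every attached bdev.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

The per-second latency tables that follow are bdevperf's routine progress output; the final ten-second summary is also emitted as JSON at the end of the run.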
00:07:48.151 [2024-11-05 04:19:01.532985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2812984 ] 00:07:48.151 [2024-11-05 04:19:01.617202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.151 [2024-11-05 04:19:01.647052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.722 04:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:48.722 04:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:48.722 04:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:49.295 Nvme0n1 00:07:49.295 04:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:49.295 [ 00:07:49.295 { 00:07:49.295 "name": "Nvme0n1", 00:07:49.295 "aliases": [ 00:07:49.295 "8b6692cc-3b6b-425d-a5e4-161f66449921" 00:07:49.295 ], 00:07:49.295 "product_name": "NVMe disk", 00:07:49.295 "block_size": 4096, 00:07:49.295 "num_blocks": 38912, 00:07:49.295 "uuid": "8b6692cc-3b6b-425d-a5e4-161f66449921", 00:07:49.295 "numa_id": 0, 00:07:49.295 "assigned_rate_limits": { 00:07:49.295 "rw_ios_per_sec": 0, 00:07:49.295 "rw_mbytes_per_sec": 0, 00:07:49.295 "r_mbytes_per_sec": 0, 00:07:49.295 "w_mbytes_per_sec": 0 00:07:49.295 }, 00:07:49.295 "claimed": false, 00:07:49.296 "zoned": false, 00:07:49.296 "supported_io_types": { 00:07:49.296 "read": true, 00:07:49.296 "write": true, 00:07:49.296 "unmap": true, 00:07:49.296 "flush": true, 00:07:49.296 "reset": true, 00:07:49.296 "nvme_admin": true, 00:07:49.296 "nvme_io": true, 00:07:49.296 "nvme_io_md": false, 00:07:49.296 "write_zeroes": true, 00:07:49.296 "zcopy": false, 00:07:49.296 "get_zone_info": false, 00:07:49.296 "zone_management": false, 00:07:49.296 "zone_append": false, 00:07:49.296 "compare": true, 00:07:49.296 "compare_and_write": true, 00:07:49.296 "abort": true, 00:07:49.296 "seek_hole": false, 00:07:49.296 "seek_data": false, 00:07:49.296 "copy": true, 00:07:49.296 "nvme_iov_md": false 00:07:49.296 }, 00:07:49.296 "memory_domains": [ 00:07:49.296 { 00:07:49.296 "dma_device_id": "system", 00:07:49.296 "dma_device_type": 1 00:07:49.296 } 00:07:49.296 ], 00:07:49.296 "driver_specific": { 00:07:49.296 "nvme": [ 00:07:49.296 { 00:07:49.296 "trid": { 00:07:49.296 "trtype": "TCP", 00:07:49.296 "adrfam": "IPv4", 00:07:49.296 "traddr": "10.0.0.2", 00:07:49.296 "trsvcid": "4420", 00:07:49.296 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:49.296 }, 00:07:49.296 "ctrlr_data": { 00:07:49.296 "cntlid": 1, 00:07:49.296 "vendor_id": "0x8086", 00:07:49.296 "model_number": "SPDK bdev Controller", 00:07:49.296 "serial_number": "SPDK0", 00:07:49.296 "firmware_revision": "25.01", 00:07:49.296 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:49.296 "oacs": { 00:07:49.296 "security": 0, 00:07:49.296 "format": 0, 00:07:49.296 "firmware": 0, 00:07:49.296 "ns_manage": 0 00:07:49.296 }, 00:07:49.296 "multi_ctrlr": true, 00:07:49.296 
"ana_reporting": false 00:07:49.296 }, 00:07:49.296 "vs": { 00:07:49.296 "nvme_version": "1.3" 00:07:49.296 }, 00:07:49.296 "ns_data": { 00:07:49.296 "id": 1, 00:07:49.296 "can_share": true 00:07:49.296 } 00:07:49.296 } 00:07:49.296 ], 00:07:49.296 "mp_policy": "active_passive" 00:07:49.296 } 00:07:49.296 } 00:07:49.296 ] 00:07:49.296 04:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2813227 00:07:49.296 04:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:49.296 04:19:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:49.557 Running I/O for 10 seconds... 00:07:50.498 Latency(us) 00:07:50.498 [2024-11-05T03:19:04.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.498 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.498 Nvme0n1 : 1.00 17657.00 68.97 0.00 0.00 0.00 0.00 0.00 00:07:50.498 [2024-11-05T03:19:04.138Z] =================================================================================================================== 00:07:50.498 [2024-11-05T03:19:04.138Z] Total : 17657.00 68.97 0.00 0.00 0.00 0.00 0.00 00:07:50.498 00:07:51.438 04:19:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c014c2ed-1393-4634-8a22-249575315636 00:07:51.438 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:51.438 Nvme0n1 : 2.00 17811.50 69.58 0.00 0.00 0.00 0.00 0.00 00:07:51.438 [2024-11-05T03:19:05.078Z] =================================================================================================================== 00:07:51.438 [2024-11-05T03:19:05.078Z] Total : 17811.50 69.58 0.00 0.00 0.00 0.00 0.00 00:07:51.438 00:07:51.698 true 00:07:51.698 04:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c014c2ed-1393-4634-8a22-249575315636 00:07:51.698 04:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:51.698 04:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:51.698 04:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:51.698 04:19:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2813227 00:07:52.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:52.640 Nvme0n1 : 3.00 17881.67 69.85 0.00 0.00 0.00 0.00 0.00 00:07:52.640 [2024-11-05T03:19:06.280Z] =================================================================================================================== 00:07:52.640 [2024-11-05T03:19:06.280Z] Total : 17881.67 69.85 0.00 0.00 0.00 0.00 0.00 00:07:52.640 00:07:53.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:53.580 Nvme0n1 : 4.00 17921.75 70.01 0.00 0.00 0.00 0.00 0.00 00:07:53.580 [2024-11-05T03:19:07.220Z] 
=================================================================================================================== 00:07:53.580 [2024-11-05T03:19:07.220Z] Total : 17921.75 70.01 0.00 0.00 0.00 0.00 0.00 00:07:53.580 00:07:54.521 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:54.521 Nvme0n1 : 5.00 17947.20 70.11 0.00 0.00 0.00 0.00 0.00 00:07:54.521 [2024-11-05T03:19:08.161Z] =================================================================================================================== 00:07:54.521 [2024-11-05T03:19:08.161Z] Total : 17947.20 70.11 0.00 0.00 0.00 0.00 0.00 00:07:54.521 00:07:55.462 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.462 Nvme0n1 : 6.00 17972.83 70.21 0.00 0.00 0.00 0.00 0.00 00:07:55.462 [2024-11-05T03:19:09.102Z] =================================================================================================================== 00:07:55.462 [2024-11-05T03:19:09.102Z] Total : 17972.83 70.21 0.00 0.00 0.00 0.00 0.00 00:07:55.462 00:07:56.406 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.406 Nvme0n1 : 7.00 17995.29 70.29 0.00 0.00 0.00 0.00 0.00 00:07:56.406 [2024-11-05T03:19:10.046Z] =================================================================================================================== 00:07:56.406 [2024-11-05T03:19:10.046Z] Total : 17995.29 70.29 0.00 0.00 0.00 0.00 0.00 00:07:56.406 00:07:57.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.790 Nvme0n1 : 8.00 18006.50 70.34 0.00 0.00 0.00 0.00 0.00 00:07:57.790 [2024-11-05T03:19:11.430Z] =================================================================================================================== 00:07:57.790 [2024-11-05T03:19:11.430Z] Total : 18006.50 70.34 0.00 0.00 0.00 0.00 0.00 00:07:57.790 00:07:58.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.732 Nvme0n1 : 9.00 18018.22 70.38 0.00 0.00 0.00 0.00 0.00 00:07:58.732 [2024-11-05T03:19:12.372Z] =================================================================================================================== 00:07:58.732 [2024-11-05T03:19:12.372Z] Total : 18018.22 70.38 0.00 0.00 0.00 0.00 0.00 00:07:58.732 00:07:59.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.675 Nvme0n1 : 10.00 18031.70 70.44 0.00 0.00 0.00 0.00 0.00 00:07:59.675 [2024-11-05T03:19:13.315Z] =================================================================================================================== 00:07:59.675 [2024-11-05T03:19:13.315Z] Total : 18031.70 70.44 0.00 0.00 0.00 0.00 0.00 00:07:59.675 00:07:59.675 00:07:59.675 Latency(us) 00:07:59.675 [2024-11-05T03:19:13.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.675 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.675 Nvme0n1 : 10.01 18032.91 70.44 0.00 0.00 7095.92 4314.45 18131.63 00:07:59.675 [2024-11-05T03:19:13.315Z] =================================================================================================================== 00:07:59.675 [2024-11-05T03:19:13.315Z] Total : 18032.91 70.44 0.00 0.00 7095.92 4314.45 18131.63 00:07:59.675 { 00:07:59.675 "results": [ 00:07:59.675 { 00:07:59.675 "job": "Nvme0n1", 00:07:59.675 "core_mask": "0x2", 00:07:59.675 "workload": "randwrite", 00:07:59.675 "status": "finished", 00:07:59.675 "queue_depth": 128, 00:07:59.675 "io_size": 4096, 00:07:59.675 
"runtime": 10.006425, 00:07:59.675 "iops": 18032.913852849546, 00:07:59.675 "mibps": 70.44106973769354, 00:07:59.675 "io_failed": 0, 00:07:59.675 "io_timeout": 0, 00:07:59.675 "avg_latency_us": 7095.9231570838765, 00:07:59.675 "min_latency_us": 4314.453333333333, 00:07:59.675 "max_latency_us": 18131.626666666667 00:07:59.675 } 00:07:59.675 ], 00:07:59.675 "core_count": 1 00:07:59.675 } 00:07:59.675 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2812984 00:07:59.675 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2812984 ']' 00:07:59.675 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2812984 00:07:59.675 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:07:59.675 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:59.675 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2812984 00:07:59.675 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:59.675 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:59.675 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2812984' 00:07:59.675 killing process with pid 2812984 00:07:59.675 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2812984 00:07:59.675 Received shutdown signal, test time was about 10.000000 seconds 00:07:59.675 00:07:59.675 Latency(us) 00:07:59.675 [2024-11-05T03:19:13.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.675 [2024-11-05T03:19:13.315Z] =================================================================================================================== 00:07:59.675 [2024-11-05T03:19:13.315Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:59.675 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2812984 00:07:59.675 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:59.936 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:59.936 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c014c2ed-1393-4634-8a22-249575315636 00:07:59.936 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:00.197 04:19:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2809186 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2809186 00:08:00.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2809186 Killed "${NVMF_APP[@]}" "$@" 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2815361 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2815361 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2815361 ']' 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:00.197 04:19:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:00.197 [2024-11-05 04:19:13.800820] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:08:00.197 [2024-11-05 04:19:13.800875] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.458 [2024-11-05 04:19:13.876729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.458 [2024-11-05 04:19:13.911171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.458 [2024-11-05 04:19:13.911206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.458 [2024-11-05 04:19:13.911214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.458 [2024-11-05 04:19:13.911220] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
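This is the point of the dirty variant: the target that owned the lvstore was killed with SIGKILL (kill -9 2809186), leaving the blobstore marked in use on disk, and a fresh nvmf_tgt is started in its place. The lines that follow re-register the backing AIO file, which forces blobstore recovery ("Performing recovery on blobstore" in the log) before the grown geometry is re-checked. A rough sketch of that replay-and-verify step, assuming the same $SPDK tree and the lvstore UUID printed earlier in this run (the rpc.py subcommands and expected cluster counts are all taken from the trace), would be:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  AIO=$SPDK/test/nvmf/target/aio_bdev
  LVS=c014c2ed-1393-4634-8a22-249575315636

  # Re-create the AIO bdev on the old backing file; lvol examine then replays
  # the dirty blobstore and re-exposes lvs/lvol.
  $SPDK/scripts/rpc.py bdev_aio_create $AIO aio_bdev 4096
  $SPDK/scripts/rpc.py bdev_wait_for_examine

  # The recovered lvstore must still show the post-grow geometry.
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].free_clusters'        # expect 61
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u $LVS | jq -r '.[0].total_data_clusters'  # expect 99

Note that bdev_lvol_get_lvstores fails with -19 (No such device) while aio_bdev is absent, which is exactly the negative check the test performs between the delete and the re-create.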
00:08:00.458 [2024-11-05 04:19:13.911226] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.458 [2024-11-05 04:19:13.911801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.029 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:01.029 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:08:01.029 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:01.029 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:01.029 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:01.029 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.029 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:01.290 [2024-11-05 04:19:14.790321] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:01.290 [2024-11-05 04:19:14.790412] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:01.290 [2024-11-05 04:19:14.790443] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:01.290 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:01.290 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8b6692cc-3b6b-425d-a5e4-161f66449921 00:08:01.290 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=8b6692cc-3b6b-425d-a5e4-161f66449921 00:08:01.290 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:01.290 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:01.290 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:01.290 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:01.290 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:01.551 04:19:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8b6692cc-3b6b-425d-a5e4-161f66449921 -t 2000 00:08:01.551 [ 00:08:01.551 { 00:08:01.551 "name": "8b6692cc-3b6b-425d-a5e4-161f66449921", 00:08:01.551 "aliases": [ 00:08:01.551 "lvs/lvol" 00:08:01.551 ], 00:08:01.551 "product_name": "Logical Volume", 00:08:01.551 "block_size": 4096, 00:08:01.551 "num_blocks": 38912, 00:08:01.551 "uuid": "8b6692cc-3b6b-425d-a5e4-161f66449921", 00:08:01.551 "assigned_rate_limits": { 00:08:01.551 "rw_ios_per_sec": 0, 00:08:01.551 "rw_mbytes_per_sec": 0, 
00:08:01.551 "r_mbytes_per_sec": 0, 00:08:01.551 "w_mbytes_per_sec": 0 00:08:01.551 }, 00:08:01.551 "claimed": false, 00:08:01.551 "zoned": false, 00:08:01.551 "supported_io_types": { 00:08:01.551 "read": true, 00:08:01.551 "write": true, 00:08:01.551 "unmap": true, 00:08:01.551 "flush": false, 00:08:01.551 "reset": true, 00:08:01.551 "nvme_admin": false, 00:08:01.551 "nvme_io": false, 00:08:01.551 "nvme_io_md": false, 00:08:01.551 "write_zeroes": true, 00:08:01.551 "zcopy": false, 00:08:01.551 "get_zone_info": false, 00:08:01.551 "zone_management": false, 00:08:01.551 "zone_append": false, 00:08:01.551 "compare": false, 00:08:01.551 "compare_and_write": false, 00:08:01.551 "abort": false, 00:08:01.551 "seek_hole": true, 00:08:01.551 "seek_data": true, 00:08:01.551 "copy": false, 00:08:01.551 "nvme_iov_md": false 00:08:01.551 }, 00:08:01.551 "driver_specific": { 00:08:01.551 "lvol": { 00:08:01.551 "lvol_store_uuid": "c014c2ed-1393-4634-8a22-249575315636", 00:08:01.551 "base_bdev": "aio_bdev", 00:08:01.551 "thin_provision": false, 00:08:01.551 "num_allocated_clusters": 38, 00:08:01.551 "snapshot": false, 00:08:01.551 "clone": false, 00:08:01.551 "esnap_clone": false 00:08:01.551 } 00:08:01.551 } 00:08:01.551 } 00:08:01.551 ] 00:08:01.551 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:01.551 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c014c2ed-1393-4634-8a22-249575315636 00:08:01.551 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:01.812 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:01.812 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c014c2ed-1393-4634-8a22-249575315636 00:08:01.812 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:02.073 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:02.073 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:02.073 [2024-11-05 04:19:15.638542] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:02.073 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c014c2ed-1393-4634-8a22-249575315636 00:08:02.073 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:08:02.073 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c014c2ed-1393-4634-8a22-249575315636 00:08:02.073 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.073 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.073 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.073 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.073 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.073 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.073 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.073 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:02.073 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c014c2ed-1393-4634-8a22-249575315636 00:08:02.342 request: 00:08:02.342 { 00:08:02.342 "uuid": "c014c2ed-1393-4634-8a22-249575315636", 00:08:02.342 "method": "bdev_lvol_get_lvstores", 00:08:02.342 "req_id": 1 00:08:02.342 } 00:08:02.342 Got JSON-RPC error response 00:08:02.342 response: 00:08:02.342 { 00:08:02.342 "code": -19, 00:08:02.342 "message": "No such device" 00:08:02.342 } 00:08:02.342 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:08:02.342 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:02.342 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:02.342 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:02.342 04:19:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:02.603 aio_bdev 00:08:02.603 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8b6692cc-3b6b-425d-a5e4-161f66449921 00:08:02.603 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=8b6692cc-3b6b-425d-a5e4-161f66449921 00:08:02.603 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:08:02.603 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:08:02.603 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:08:02.603 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:08:02.603 04:19:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:02.603 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8b6692cc-3b6b-425d-a5e4-161f66449921 -t 2000 00:08:02.864 [ 00:08:02.864 { 00:08:02.864 "name": "8b6692cc-3b6b-425d-a5e4-161f66449921", 00:08:02.864 "aliases": [ 00:08:02.864 "lvs/lvol" 00:08:02.864 ], 00:08:02.864 "product_name": "Logical Volume", 00:08:02.864 "block_size": 4096, 00:08:02.864 "num_blocks": 38912, 00:08:02.864 "uuid": "8b6692cc-3b6b-425d-a5e4-161f66449921", 00:08:02.864 "assigned_rate_limits": { 00:08:02.864 "rw_ios_per_sec": 0, 00:08:02.864 "rw_mbytes_per_sec": 0, 00:08:02.864 "r_mbytes_per_sec": 0, 00:08:02.864 "w_mbytes_per_sec": 0 00:08:02.864 }, 00:08:02.864 "claimed": false, 00:08:02.864 "zoned": false, 00:08:02.864 "supported_io_types": { 00:08:02.864 "read": true, 00:08:02.864 "write": true, 00:08:02.864 "unmap": true, 00:08:02.864 "flush": false, 00:08:02.864 "reset": true, 00:08:02.864 "nvme_admin": false, 00:08:02.864 "nvme_io": false, 00:08:02.864 "nvme_io_md": false, 00:08:02.864 "write_zeroes": true, 00:08:02.864 "zcopy": false, 00:08:02.864 "get_zone_info": false, 00:08:02.864 "zone_management": false, 00:08:02.864 "zone_append": false, 00:08:02.864 "compare": false, 00:08:02.864 "compare_and_write": false, 00:08:02.864 "abort": false, 00:08:02.864 "seek_hole": true, 00:08:02.864 "seek_data": true, 00:08:02.864 "copy": false, 00:08:02.864 "nvme_iov_md": false 00:08:02.864 }, 00:08:02.864 "driver_specific": { 00:08:02.864 "lvol": { 00:08:02.864 "lvol_store_uuid": "c014c2ed-1393-4634-8a22-249575315636", 00:08:02.864 "base_bdev": "aio_bdev", 00:08:02.864 "thin_provision": false, 00:08:02.864 "num_allocated_clusters": 38, 00:08:02.864 "snapshot": false, 00:08:02.864 "clone": false, 00:08:02.864 "esnap_clone": false 00:08:02.864 } 00:08:02.864 } 00:08:02.864 } 00:08:02.864 ] 00:08:02.864 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:08:02.864 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c014c2ed-1393-4634-8a22-249575315636 00:08:02.864 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:02.864 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:03.124 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c014c2ed-1393-4634-8a22-249575315636 00:08:03.124 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:03.124 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:03.124 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8b6692cc-3b6b-425d-a5e4-161f66449921 00:08:03.385 04:19:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c014c2ed-1393-4634-8a22-249575315636 00:08:03.385 04:19:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:03.646 00:08:03.646 real 0m17.356s 00:08:03.646 user 0m44.991s 00:08:03.646 sys 0m2.855s 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:03.646 ************************************ 00:08:03.646 END TEST lvs_grow_dirty 00:08:03.646 ************************************ 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:03.646 nvmf_trace.0 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:03.646 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:03.646 rmmod nvme_tcp 00:08:03.906 rmmod nvme_fabrics 00:08:03.906 rmmod nvme_keyring 00:08:03.906 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:03.906 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:03.906 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:03.906 
04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2815361 ']' 00:08:03.906 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2815361 00:08:03.906 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2815361 ']' 00:08:03.906 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2815361 00:08:03.906 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:08:03.906 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2815361 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2815361' 00:08:03.907 killing process with pid 2815361 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2815361 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2815361 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.907 04:19:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:06.453 00:08:06.453 real 0m43.610s 00:08:06.453 user 1m6.542s 00:08:06.453 sys 0m10.121s 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:06.453 ************************************ 00:08:06.453 END TEST nvmf_lvs_grow 00:08:06.453 ************************************ 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:06.453 ************************************ 00:08:06.453 START TEST nvmf_bdev_io_wait 00:08:06.453 ************************************ 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:06.453 * Looking for test storage... 00:08:06.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:06.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.453 --rc genhtml_branch_coverage=1 00:08:06.453 --rc genhtml_function_coverage=1 00:08:06.453 --rc genhtml_legend=1 00:08:06.453 --rc geninfo_all_blocks=1 00:08:06.453 --rc geninfo_unexecuted_blocks=1 00:08:06.453 00:08:06.453 ' 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:06.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.453 --rc genhtml_branch_coverage=1 00:08:06.453 --rc genhtml_function_coverage=1 00:08:06.453 --rc genhtml_legend=1 00:08:06.453 --rc geninfo_all_blocks=1 00:08:06.453 --rc geninfo_unexecuted_blocks=1 00:08:06.453 00:08:06.453 ' 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:06.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.453 --rc genhtml_branch_coverage=1 00:08:06.453 --rc genhtml_function_coverage=1 00:08:06.453 --rc genhtml_legend=1 00:08:06.453 --rc geninfo_all_blocks=1 00:08:06.453 --rc geninfo_unexecuted_blocks=1 00:08:06.453 00:08:06.453 ' 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:06.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.453 --rc genhtml_branch_coverage=1 00:08:06.453 --rc genhtml_function_coverage=1 00:08:06.453 --rc genhtml_legend=1 00:08:06.453 --rc geninfo_all_blocks=1 00:08:06.453 --rc geninfo_unexecuted_blocks=1 00:08:06.453 00:08:06.453 ' 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:06.453 04:19:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:06.453 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:06.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:06.454 04:19:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:14.597 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:14.598 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:14.598 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:14.598 04:19:26 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:14.598 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:14.598 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:14.598 04:19:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:14.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:14.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.630 ms 00:08:14.598 00:08:14.598 --- 10.0.0.2 ping statistics --- 00:08:14.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.598 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:14.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:14.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:08:14.598 00:08:14.598 --- 10.0.0.1 ping statistics --- 00:08:14.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:14.598 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2820430 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2820430 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:14.598 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 2820430 ']' 00:08:14.599 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:14.599 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:14.599 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:14.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:14.599 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:14.599 04:19:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.599 [2024-11-05 04:19:27.377996] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
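Before the target application is launched, nvmf_tcp_init has split the two detected ice ports between the root namespace (cvl_0_1, the initiator side) and a private namespace (cvl_0_0, the target side), and the two one-packet pings above verify the 10.0.0.0/24 path in both directions. Condensed from the trace, the plumbing amounts to the following; the interface names and addresses are what this particular job discovered, not fixed constants:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns

The nvmf_tgt process started next runs under ip netns exec cvl_0_0_ns_spdk, so it listens on the namespaced side of the link.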
00:08:14.599 [2024-11-05 04:19:27.378046] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:14.599 [2024-11-05 04:19:27.454830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.599 [2024-11-05 04:19:27.492178] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:14.599 [2024-11-05 04:19:27.492212] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:14.599 [2024-11-05 04:19:27.492221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:14.599 [2024-11-05 04:19:27.492228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:14.599 [2024-11-05 04:19:27.492233] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:14.599 [2024-11-05 04:19:27.493769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.599 [2024-11-05 04:19:27.493884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.599 [2024-11-05 04:19:27.494120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.599 [2024-11-05 04:19:27.494121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.599 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:14.599 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:08:14.599 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:14.599 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:14.599 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.599 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:14.599 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:14.599 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.599 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.599 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.599 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:14.599 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.599 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:08:14.860 [2024-11-05 04:19:28.283009] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.860 Malloc0 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.860 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:14.861 [2024-11-05 04:19:28.342312] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2820522 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2820525 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:14.861 { 00:08:14.861 "params": { 
00:08:14.861 "name": "Nvme$subsystem", 00:08:14.861 "trtype": "$TEST_TRANSPORT", 00:08:14.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.861 "adrfam": "ipv4", 00:08:14.861 "trsvcid": "$NVMF_PORT", 00:08:14.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.861 "hdgst": ${hdgst:-false}, 00:08:14.861 "ddgst": ${ddgst:-false} 00:08:14.861 }, 00:08:14.861 "method": "bdev_nvme_attach_controller" 00:08:14.861 } 00:08:14.861 EOF 00:08:14.861 )") 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2820527 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2820531 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:14.861 { 00:08:14.861 "params": { 00:08:14.861 "name": "Nvme$subsystem", 00:08:14.861 "trtype": "$TEST_TRANSPORT", 00:08:14.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.861 "adrfam": "ipv4", 00:08:14.861 "trsvcid": "$NVMF_PORT", 00:08:14.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.861 "hdgst": ${hdgst:-false}, 00:08:14.861 "ddgst": ${ddgst:-false} 00:08:14.861 }, 00:08:14.861 "method": "bdev_nvme_attach_controller" 00:08:14.861 } 00:08:14.861 EOF 00:08:14.861 )") 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:14.861 { 00:08:14.861 "params": { 00:08:14.861 "name": "Nvme$subsystem", 00:08:14.861 "trtype": "$TEST_TRANSPORT", 00:08:14.861 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:08:14.861 "adrfam": "ipv4", 00:08:14.861 "trsvcid": "$NVMF_PORT", 00:08:14.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.861 "hdgst": ${hdgst:-false}, 00:08:14.861 "ddgst": ${ddgst:-false} 00:08:14.861 }, 00:08:14.861 "method": "bdev_nvme_attach_controller" 00:08:14.861 } 00:08:14.861 EOF 00:08:14.861 )") 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:14.861 { 00:08:14.861 "params": { 00:08:14.861 "name": "Nvme$subsystem", 00:08:14.861 "trtype": "$TEST_TRANSPORT", 00:08:14.861 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:14.861 "adrfam": "ipv4", 00:08:14.861 "trsvcid": "$NVMF_PORT", 00:08:14.861 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:14.861 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:14.861 "hdgst": ${hdgst:-false}, 00:08:14.861 "ddgst": ${ddgst:-false} 00:08:14.861 }, 00:08:14.861 "method": "bdev_nvme_attach_controller" 00:08:14.861 } 00:08:14.861 EOF 00:08:14.861 )") 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2820522 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:14.861 "params": { 00:08:14.861 "name": "Nvme1", 00:08:14.861 "trtype": "tcp", 00:08:14.861 "traddr": "10.0.0.2", 00:08:14.861 "adrfam": "ipv4", 00:08:14.861 "trsvcid": "4420", 00:08:14.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.861 "hdgst": false, 00:08:14.861 "ddgst": false 00:08:14.861 }, 00:08:14.861 "method": "bdev_nvme_attach_controller" 00:08:14.861 }' 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
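At this point the namespaced target is fully provisioned: the trace above created the TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420. Condensed into the equivalent rpc.py calls (rpc_cmd in this harness is effectively a wrapper around scripts/rpc.py, the same tool the lvs_grow teardown earlier invoked directly):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Each bdevperf worker is then handed its attach configuration as JSON on /dev/fd/63, i.e. via process substitution; a sketch of one such launch, using the write worker's values from this run:

    ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256

The printf output just above is the JSON that substitution resolves to; one copy is emitted per worker (write, read, flush, unmap on core masks 0x10 through 0x80).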
00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:14.861 "params": { 00:08:14.861 "name": "Nvme1", 00:08:14.861 "trtype": "tcp", 00:08:14.861 "traddr": "10.0.0.2", 00:08:14.861 "adrfam": "ipv4", 00:08:14.861 "trsvcid": "4420", 00:08:14.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.861 "hdgst": false, 00:08:14.861 "ddgst": false 00:08:14.861 }, 00:08:14.861 "method": "bdev_nvme_attach_controller" 00:08:14.861 }' 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:14.861 "params": { 00:08:14.861 "name": "Nvme1", 00:08:14.861 "trtype": "tcp", 00:08:14.861 "traddr": "10.0.0.2", 00:08:14.861 "adrfam": "ipv4", 00:08:14.861 "trsvcid": "4420", 00:08:14.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.861 "hdgst": false, 00:08:14.861 "ddgst": false 00:08:14.861 }, 00:08:14.861 "method": "bdev_nvme_attach_controller" 00:08:14.861 }' 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:14.861 04:19:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:14.861 "params": { 00:08:14.861 "name": "Nvme1", 00:08:14.861 "trtype": "tcp", 00:08:14.861 "traddr": "10.0.0.2", 00:08:14.861 "adrfam": "ipv4", 00:08:14.861 "trsvcid": "4420", 00:08:14.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:14.861 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:14.861 "hdgst": false, 00:08:14.861 "ddgst": false 00:08:14.861 }, 00:08:14.861 "method": "bdev_nvme_attach_controller" 00:08:14.861 }' 00:08:14.861 [2024-11-05 04:19:28.394863] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:08:14.861 [2024-11-05 04:19:28.394917] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:14.861 [2024-11-05 04:19:28.399534] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:08:14.861 [2024-11-05 04:19:28.399579] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:14.862 [2024-11-05 04:19:28.402238] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:08:14.862 [2024-11-05 04:19:28.402285] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:14.862 [2024-11-05 04:19:28.402641] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:08:14.862 [2024-11-05 04:19:28.402689] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:15.123 [2024-11-05 04:19:28.547990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.123 [2024-11-05 04:19:28.577885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:15.123 [2024-11-05 04:19:28.603558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.123 [2024-11-05 04:19:28.632978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:15.123 [2024-11-05 04:19:28.650762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.123 [2024-11-05 04:19:28.679460] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:15.123 [2024-11-05 04:19:28.712292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.123 [2024-11-05 04:19:28.741026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:15.383 Running I/O for 1 seconds... 00:08:15.384 Running I/O for 1 seconds... 00:08:15.384 Running I/O for 1 seconds... 00:08:15.644 Running I/O for 1 seconds... 00:08:16.215 11951.00 IOPS, 46.68 MiB/s 00:08:16.215 Latency(us) 00:08:16.215 [2024-11-05T03:19:29.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.215 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:16.215 Nvme1n1 : 1.01 11973.37 46.77 0.00 0.00 10642.33 4396.37 13817.17 00:08:16.215 [2024-11-05T03:19:29.855Z] =================================================================================================================== 00:08:16.215 [2024-11-05T03:19:29.855Z] Total : 11973.37 46.77 0.00 0.00 10642.33 4396.37 13817.17 00:08:16.476 12393.00 IOPS, 48.41 MiB/s 00:08:16.476 Latency(us) 00:08:16.476 [2024-11-05T03:19:30.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.476 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:16.476 Nvme1n1 : 1.01 12434.68 48.57 0.00 0.00 10254.77 5434.03 19223.89 00:08:16.476 [2024-11-05T03:19:30.116Z] =================================================================================================================== 00:08:16.476 [2024-11-05T03:19:30.116Z] Total : 12434.68 48.57 0.00 0.00 10254.77 5434.03 19223.89 00:08:16.476 11768.00 IOPS, 45.97 MiB/s 00:08:16.476 Latency(us) 00:08:16.476 [2024-11-05T03:19:30.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.477 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:16.477 Nvme1n1 : 1.01 11894.75 46.46 0.00 0.00 10738.19 2730.67 26432.85 00:08:16.477 [2024-11-05T03:19:30.117Z] =================================================================================================================== 00:08:16.477 [2024-11-05T03:19:30.117Z] Total : 11894.75 46.46 0.00 0.00 10738.19 2730.67 26432.85 00:08:16.477 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2820525 00:08:16.477 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2820527 00:08:16.477 183712.00 IOPS, 717.62 MiB/s 00:08:16.477 Latency(us) 00:08:16.477 [2024-11-05T03:19:30.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:16.477 Job: Nvme1n1 (Core Mask 0x40, workload: flush, 
depth: 128, IO size: 4096) 00:08:16.477 Nvme1n1 : 1.00 183344.88 716.19 0.00 0.00 694.25 298.67 1966.08 00:08:16.477 [2024-11-05T03:19:30.117Z] =================================================================================================================== 00:08:16.477 [2024-11-05T03:19:30.117Z] Total : 183344.88 716.19 0.00 0.00 694.25 298.67 1966.08 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2820531 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:16.738 rmmod nvme_tcp 00:08:16.738 rmmod nvme_fabrics 00:08:16.738 rmmod nvme_keyring 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2820430 ']' 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2820430 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 2820430 ']' 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 2820430 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2820430 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2820430' 00:08:16.738 killing process with pid 
2820430 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 2820430 00:08:16.738 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 2820430 00:08:17.000 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:17.000 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:17.000 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:17.000 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:17.000 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:17.000 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:17.000 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:17.000 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:17.000 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:17.000 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.000 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:17.000 04:19:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.915 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:18.915 00:08:18.915 real 0m12.804s 00:08:18.915 user 0m19.229s 00:08:18.915 sys 0m6.887s 00:08:18.915 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:18.915 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.915 ************************************ 00:08:18.915 END TEST nvmf_bdev_io_wait 00:08:18.915 ************************************ 00:08:18.915 04:19:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:18.915 04:19:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:18.915 04:19:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:18.915 04:19:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:19.176 ************************************ 00:08:19.176 START TEST nvmf_queue_depth 00:08:19.176 ************************************ 00:08:19.176 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:19.176 * Looking for test storage... 
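Each of these tests finishes with the same nvmftestfini teardown that was just traced for nvmf_bdev_io_wait. Condensed into plain commands it is roughly the sketch below; $nvmfpid stands for the target pid (2820430 in this run), and the netns deletion paraphrases what the remove_spdk_ns helper is assumed to do:

  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1  # drop the test subsystem first
  sync
  modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics and nvme_keyring
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"
  iptables-save | grep -v SPDK_NVMF | iptables-restore  # strip only the SPDK-tagged rule
  ip netns delete cvl_0_0_ns_spdk                       # remove_spdk_ns, approximately
  ip -4 addr flush cvl_0_1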
00:08:19.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.176 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:19.176 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:08:19.176 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:19.176 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:19.176 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:19.176 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:19.176 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:19.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.177 --rc genhtml_branch_coverage=1 00:08:19.177 --rc genhtml_function_coverage=1 00:08:19.177 --rc genhtml_legend=1 00:08:19.177 --rc geninfo_all_blocks=1 00:08:19.177 --rc geninfo_unexecuted_blocks=1 00:08:19.177 00:08:19.177 ' 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:19.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.177 --rc genhtml_branch_coverage=1 00:08:19.177 --rc genhtml_function_coverage=1 00:08:19.177 --rc genhtml_legend=1 00:08:19.177 --rc geninfo_all_blocks=1 00:08:19.177 --rc geninfo_unexecuted_blocks=1 00:08:19.177 00:08:19.177 ' 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:19.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.177 --rc genhtml_branch_coverage=1 00:08:19.177 --rc genhtml_function_coverage=1 00:08:19.177 --rc genhtml_legend=1 00:08:19.177 --rc geninfo_all_blocks=1 00:08:19.177 --rc geninfo_unexecuted_blocks=1 00:08:19.177 00:08:19.177 ' 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:19.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:19.177 --rc genhtml_branch_coverage=1 00:08:19.177 --rc genhtml_function_coverage=1 00:08:19.177 --rc genhtml_legend=1 00:08:19.177 --rc geninfo_all_blocks=1 00:08:19.177 --rc geninfo_unexecuted_blocks=1 00:08:19.177 00:08:19.177 ' 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:19.177 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:19.177 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.178 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.178 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.178 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:19.178 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:19.178 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:19.178 04:19:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:27.404 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:27.404 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:27.404 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:27.405 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:27.405 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:27.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:27.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:08:27.405 00:08:27.405 --- 10.0.0.2 ping statistics --- 00:08:27.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.405 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:27.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:27.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:08:27.405 00:08:27.405 --- 10.0.0.1 ping statistics --- 00:08:27.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:27.405 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:27.405 04:19:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2825171 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2825171 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2825171 ']' 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.405 [2024-11-05 04:19:40.080431] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
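All of the interface plumbing above amounts to moving one port of the E810 pair into a private network namespace, so the target (cvl_0_0, 10.0.0.2) and the initiator (cvl_0_1, 10.0.0.1) reach each other over the physical link rather than loopback. Condensed from the nvmf_tcp_init trace, with the iptables comment tag noted rather than spelled out:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # the real rule also carries -m comment 'SPDK_NVMF:...' so teardown can find it later
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # verify both directions before testing
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1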
00:08:27.405 [2024-11-05 04:19:40.080503] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:27.405 [2024-11-05 04:19:40.181858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.405 [2024-11-05 04:19:40.232955] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:27.405 [2024-11-05 04:19:40.233010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:27.405 [2024-11-05 04:19:40.233018] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:27.405 [2024-11-05 04:19:40.233026] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:27.405 [2024-11-05 04:19:40.233032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:27.405 [2024-11-05 04:19:40.233840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.405 [2024-11-05 04:19:40.947152] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:27.405 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.406 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.406 Malloc0 00:08:27.406 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.406 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:27.406 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.406 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.406 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.406 04:19:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:27.406 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.406 04:19:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.406 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.406 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:27.406 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.406 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.406 [2024-11-05 04:19:41.008583] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:27.406 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.406 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2825453 00:08:27.406 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:27.406 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:27.406 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2825453 /var/tmp/bdevperf.sock 00:08:27.406 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2825453 ']' 00:08:27.406 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:27.406 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:27.406 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:27.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:27.406 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:27.406 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:27.666 [2024-11-05 04:19:41.066362] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
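Stripped of the xtrace noise, queue_depth.sh has at this point configured the entire target with five RPCs and launched bdevperf in RPC-server mode. The sequence as run here (rpc_cmd effectively issues scripts/rpc.py calls to the target running inside cvl_0_0_ns_spdk; bdevperf lives under build/examples in this checkout):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192       # TCP transport
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0          # 64 MiB RAM bdev, 512 B blocks
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # -z: run nothing until a perform_tests RPC arrives on /var/tmp/bdevperf.sock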
00:08:27.666 [2024-11-05 04:19:41.066429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2825453 ] 00:08:27.666 [2024-11-05 04:19:41.141801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.666 [2024-11-05 04:19:41.183783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.238 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:28.238 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:08:28.498 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:28.498 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.498 04:19:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:28.498 NVMe0n1 00:08:28.498 04:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.498 04:19:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:28.759 Running I/O for 10 seconds... 00:08:30.641 8703.00 IOPS, 34.00 MiB/s [2024-11-05T03:19:45.222Z] 9734.50 IOPS, 38.03 MiB/s [2024-11-05T03:19:46.607Z] 10547.00 IOPS, 41.20 MiB/s [2024-11-05T03:19:47.178Z] 10806.25 IOPS, 42.21 MiB/s [2024-11-05T03:19:48.584Z] 11056.60 IOPS, 43.19 MiB/s [2024-11-05T03:19:49.527Z] 11096.17 IOPS, 43.34 MiB/s [2024-11-05T03:19:50.469Z] 11242.29 IOPS, 43.92 MiB/s [2024-11-05T03:19:51.412Z] 11264.88 IOPS, 44.00 MiB/s [2024-11-05T03:19:52.354Z] 11356.44 IOPS, 44.36 MiB/s [2024-11-05T03:19:52.354Z] 11366.90 IOPS, 44.40 MiB/s 00:08:38.714 Latency(us) 00:08:38.714 [2024-11-05T03:19:52.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.714 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:38.714 Verification LBA range: start 0x0 length 0x4000 00:08:38.714 NVMe0n1 : 10.05 11408.06 44.56 0.00 0.00 89458.09 12615.68 76458.67 00:08:38.714 [2024-11-05T03:19:52.354Z] =================================================================================================================== 00:08:38.714 [2024-11-05T03:19:52.354Z] Total : 11408.06 44.56 0.00 0.00 89458.09 12615.68 76458.67 00:08:38.714 { 00:08:38.714 "results": [ 00:08:38.714 { 00:08:38.714 "job": "NVMe0n1", 00:08:38.714 "core_mask": "0x1", 00:08:38.714 "workload": "verify", 00:08:38.714 "status": "finished", 00:08:38.714 "verify_range": { 00:08:38.714 "start": 0, 00:08:38.714 "length": 16384 00:08:38.714 }, 00:08:38.714 "queue_depth": 1024, 00:08:38.714 "io_size": 4096, 00:08:38.714 "runtime": 10.051491, 00:08:38.714 "iops": 11408.05876461512, 00:08:38.714 "mibps": 44.56272954927781, 00:08:38.714 "io_failed": 0, 00:08:38.714 "io_timeout": 0, 00:08:38.714 "avg_latency_us": 89458.0904910408, 00:08:38.714 "min_latency_us": 12615.68, 00:08:38.714 "max_latency_us": 76458.66666666667 00:08:38.714 } 00:08:38.714 ], 00:08:38.714 "core_count": 1 00:08:38.714 } 00:08:38.714 04:19:52 
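Because bdevperf was started with -z it idles until driven over its private socket: queue_depth.sh attaches the remote namespace as bdev NVMe0n1 and then fires perform_tests, which produced the 10-second verify summary above. The two driving commands from this run:

  rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The numbers are self-consistent: 11408 IOPS at an average latency of 89458 us gives 11408 * 0.089458, about 1021 commands in flight, i.e. the requested queue depth of 1024 (Little's law), so the average latency here measures queueing at -q 1024 rather than device speed.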
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2825453 00:08:38.714 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2825453 ']' 00:08:38.714 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2825453 00:08:38.714 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:38.714 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:38.714 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2825453 00:08:38.714 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:38.714 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:38.714 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2825453' 00:08:38.714 killing process with pid 2825453 00:08:38.714 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2825453 00:08:38.714 Received shutdown signal, test time was about 10.000000 seconds 00:08:38.714 00:08:38.714 Latency(us) 00:08:38.714 [2024-11-05T03:19:52.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.714 [2024-11-05T03:19:52.354Z] =================================================================================================================== 00:08:38.714 [2024-11-05T03:19:52.354Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:38.714 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2825453 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:38.976 rmmod nvme_tcp 00:08:38.976 rmmod nvme_fabrics 00:08:38.976 rmmod nvme_keyring 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2825171 ']' 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2825171 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2825171 ']' 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 2825171 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2825171 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2825171' 00:08:38.976 killing process with pid 2825171 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2825171 00:08:38.976 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2825171 00:08:39.237 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:39.237 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:39.237 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:39.237 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:39.237 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:39.237 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:39.237 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:39.237 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:39.237 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:39.237 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.237 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.237 04:19:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.151 04:19:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:41.151 00:08:41.151 real 0m22.197s 00:08:41.151 user 0m25.858s 00:08:41.151 sys 0m6.667s 00:08:41.151 04:19:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:41.151 04:19:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:41.151 ************************************ 00:08:41.151 END TEST nvmf_queue_depth 00:08:41.151 ************************************ 00:08:41.411 04:19:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:41.411 04:19:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:41.411 04:19:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:41.411 04:19:54 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:08:41.411 ************************************ 00:08:41.411 START TEST nvmf_target_multipath 00:08:41.411 ************************************ 00:08:41.411 04:19:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:41.411 * Looking for test storage... 00:08:41.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:41.411 04:19:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:41.411 04:19:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:41.411 04:19:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:41.411 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:41.411 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:41.411 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:41.411 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:41.411 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:41.411 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:41.411 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:41.412 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:41.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.673 --rc genhtml_branch_coverage=1 00:08:41.673 --rc genhtml_function_coverage=1 00:08:41.673 --rc genhtml_legend=1 00:08:41.673 --rc geninfo_all_blocks=1 00:08:41.673 --rc geninfo_unexecuted_blocks=1 00:08:41.673 00:08:41.673 ' 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:41.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.673 --rc genhtml_branch_coverage=1 00:08:41.673 --rc genhtml_function_coverage=1 00:08:41.673 --rc genhtml_legend=1 00:08:41.673 --rc geninfo_all_blocks=1 00:08:41.673 --rc geninfo_unexecuted_blocks=1 00:08:41.673 00:08:41.673 ' 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:41.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.673 --rc genhtml_branch_coverage=1 00:08:41.673 --rc genhtml_function_coverage=1 00:08:41.673 --rc genhtml_legend=1 00:08:41.673 --rc geninfo_all_blocks=1 00:08:41.673 --rc geninfo_unexecuted_blocks=1 00:08:41.673 00:08:41.673 ' 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:41.673 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:41.673 --rc genhtml_branch_coverage=1 00:08:41.673 --rc genhtml_function_coverage=1 00:08:41.673 --rc genhtml_legend=1 00:08:41.673 --rc geninfo_all_blocks=1 00:08:41.673 --rc geninfo_unexecuted_blocks=1 00:08:41.673 00:08:41.673 ' 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:41.673 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:41.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:41.674 04:19:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:49.824 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.824 04:20:01 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:49.824 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:49.824 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.824 04:20:02 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:49.824 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.824 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:49.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:08:49.825 00:08:49.825 --- 10.0.0.2 ping statistics --- 00:08:49.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.825 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:08:49.825 00:08:49.825 --- 10.0.0.1 ping statistics --- 00:08:49.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.825 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:49.825 only one NIC for nvmf test 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
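The nvmf_tcp_init sequence traced above turns the two e810 ports into a self-contained test link: cvl_0_0 moves into a private network namespace and carries the target address 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, a tagged iptables rule opens TCP port 4420, and a ping in each direction proves the path. A condensed sketch of those steps, with the interface names and addresses taken from the trace; the setup_tcp_pair wrapper itself is illustrative, not an SPDK helper:

setup_tcp_pair() {                    # illustrative wrapper, not SPDK API
    local tgt_if=$1 ini_if=$2 ns=cvl_0_0_ns_spdk
    ip -4 addr flush "$tgt_if"        # drop stale addressing on both ports
    ip -4 addr flush "$ini_if"
    ip netns add "$ns"                # private stack for the target side
    ip link set "$tgt_if" netns "$ns"
    ip addr add 10.0.0.1/24 dev "$ini_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    ip link set "$ini_if" up
    ip netns exec "$ns" ip link set "$tgt_if" up
    ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP port, tagging the rule so teardown can strip it later
    iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT"
    # reachability check in both directions, as in the trace
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
setup_tcp_pair cvl_0_0 cvl_0_1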
00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:49.825 rmmod nvme_tcp 00:08:49.825 rmmod nvme_fabrics 00:08:49.825 rmmod nvme_keyring 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.825 04:20:02 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:51.214 00:08:51.214 real 0m9.664s 00:08:51.214 user 0m2.083s 00:08:51.214 sys 0m5.535s 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:51.214 ************************************ 00:08:51.214 END TEST nvmf_target_multipath 00:08:51.214 ************************************ 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.214 ************************************ 00:08:51.214 START TEST nvmf_zcopy 00:08:51.214 ************************************ 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:51.214 * Looking for test storage... 
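Both suite prologues log '/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected'. That message is the numeric test operator being handed an empty string: an unset CI flag reaches a test of the form [ '' -eq 1 ], which bash rejects before evaluating it. The noise is cosmetic (the test simply fails and the else branch runs), and a defaulted expansion silences it. A minimal reproduction and guard, with the flag name invented for illustration since the real variable at common.sh:33 is not visible in the trace:

flag=''                              # empty/unset CI flag
[ "$flag" -eq 1 ]                    # bash: [: : integer expression expected
[ "${flag:-0}" -eq 1 ]               # defaulted to 0: quiet, evaluates false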
00:08:51.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:51.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.214 --rc genhtml_branch_coverage=1 00:08:51.214 --rc genhtml_function_coverage=1 00:08:51.214 --rc genhtml_legend=1 00:08:51.214 --rc geninfo_all_blocks=1 00:08:51.214 --rc geninfo_unexecuted_blocks=1 00:08:51.214 00:08:51.214 ' 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:51.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.214 --rc genhtml_branch_coverage=1 00:08:51.214 --rc genhtml_function_coverage=1 00:08:51.214 --rc genhtml_legend=1 00:08:51.214 --rc geninfo_all_blocks=1 00:08:51.214 --rc geninfo_unexecuted_blocks=1 00:08:51.214 00:08:51.214 ' 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:51.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.214 --rc genhtml_branch_coverage=1 00:08:51.214 --rc genhtml_function_coverage=1 00:08:51.214 --rc genhtml_legend=1 00:08:51.214 --rc geninfo_all_blocks=1 00:08:51.214 --rc geninfo_unexecuted_blocks=1 00:08:51.214 00:08:51.214 ' 00:08:51.214 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:51.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.214 --rc genhtml_branch_coverage=1 00:08:51.214 --rc genhtml_function_coverage=1 00:08:51.214 --rc genhtml_legend=1 00:08:51.214 --rc geninfo_all_blocks=1 00:08:51.215 --rc geninfo_unexecuted_blocks=1 00:08:51.215 00:08:51.215 ' 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:08:51.215 04:20:04 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:59.360 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:59.361 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:59.361 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:59.361 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:59.361 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:59.361 04:20:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:59.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:59.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:08:59.361 00:08:59.361 --- 10.0.0.2 ping statistics --- 00:08:59.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.361 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:59.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:59.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:08:59.361 00:08:59.361 --- 10.0.0.1 ping statistics --- 00:08:59.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:59.361 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2836126 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2836126 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:59.361 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 2836126 ']' 00:08:59.362 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.362 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:59.362 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.362 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:59.362 04:20:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.362 [2024-11-05 04:20:12.215925] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
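nvmfappstart then launches the target inside the namespace and blocks in waitforlisten until the RPC socket answers; the helper's own commands are hidden above by xtrace_disable. A sketch of that launch-and-wait pattern using the values from the trace; the poll loop is an illustrative stand-in for waitforlisten, not its actual body:

ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # all trace groups, core mask 0x2
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
    # any cheap RPC proves the socket is up; rpc_get_methods always exists
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done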
00:08:59.362 [2024-11-05 04:20:12.215992] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:59.362 [2024-11-05 04:20:12.314759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.362 [2024-11-05 04:20:12.364275] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:59.362 [2024-11-05 04:20:12.364334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:59.362 [2024-11-05 04:20:12.364342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.362 [2024-11-05 04:20:12.364350] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.362 [2024-11-05 04:20:12.364356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:59.362 [2024-11-05 04:20:12.365183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.625 [2024-11-05 04:20:13.079742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.625 [2024-11-05 04:20:13.096052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.625 malloc0 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:59.625 { 00:08:59.625 "params": { 00:08:59.625 "name": "Nvme$subsystem", 00:08:59.625 "trtype": "$TEST_TRANSPORT", 00:08:59.625 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:59.625 "adrfam": "ipv4", 00:08:59.625 "trsvcid": "$NVMF_PORT", 00:08:59.625 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:59.625 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:59.625 "hdgst": ${hdgst:-false}, 00:08:59.625 "ddgst": ${ddgst:-false} 00:08:59.625 }, 00:08:59.625 "method": "bdev_nvme_attach_controller" 00:08:59.625 } 00:08:59.625 EOF 00:08:59.625 )") 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
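For anyone replaying this provisioning by hand: rpc_cmd in these traces forwards its arguments to SPDK's scripts/rpc.py, so the zcopy.sh@22 through zcopy.sh@30 steps above correspond to a standalone sketch like the following (an assumption about the replay environment: the nvmf_tgt launched under ip netns exec cvl_0_0_ns_spdk above is already up on the default /var/tmp/spdk.sock RPC socket, and the working directory is an SPDK checkout).

    # Zero-copy-enabled TCP transport, then a subsystem backed by a malloc bdev.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0   # 32 MiB bdev, 4096-byte blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The bdevperf run at zcopy.sh@33 then drives I/O against that namespace as an NVMe/TCP initiator; its --json /dev/fd/62 argument is a bash process substitution carrying the single-controller bdev_nvme_attach_controller config that gen_nvmf_target_json assembles below.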
00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:08:59.625 04:20:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:08:59.625 "params": {
00:08:59.625 "name": "Nvme1",
00:08:59.625 "trtype": "tcp",
00:08:59.625 "traddr": "10.0.0.2",
00:08:59.625 "adrfam": "ipv4",
00:08:59.625 "trsvcid": "4420",
00:08:59.625 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:59.625 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:59.625 "hdgst": false,
00:08:59.625 "ddgst": false
00:08:59.625 },
00:08:59.625 "method": "bdev_nvme_attach_controller"
00:08:59.625 }'
00:08:59.625 [2024-11-05 04:20:13.184493] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization...
00:08:59.625 [2024-11-05 04:20:13.184561] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2836238 ]
00:08:59.625 [2024-11-05 04:20:13.260180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:59.887 [2024-11-05 04:20:13.301967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:59.887 Running I/O for 10 seconds...
00:09:02.214 6648.00 IOPS, 51.94 MiB/s
[2024-11-05T03:20:16.795Z] 6697.50 IOPS, 52.32 MiB/s
[2024-11-05T03:20:17.736Z] 7265.00 IOPS, 56.76 MiB/s
[2024-11-05T03:20:18.680Z] 7879.25 IOPS, 61.56 MiB/s
[2024-11-05T03:20:19.621Z] 8253.40 IOPS, 64.48 MiB/s
[2024-11-05T03:20:20.566Z] 8502.33 IOPS, 66.42 MiB/s
[2024-11-05T03:20:21.509Z] 8679.43 IOPS, 67.81 MiB/s
[2024-11-05T03:20:22.895Z] 8811.38 IOPS, 68.84 MiB/s
[2024-11-05T03:20:23.839Z] 8913.11 IOPS, 69.63 MiB/s
[2024-11-05T03:20:23.839Z] 8999.10 IOPS, 70.31 MiB/s
00:09:10.199 Latency(us)
00:09:10.199 [2024-11-05T03:20:23.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:10.199 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:09:10.199 Verification LBA range: start 0x0 length 0x1000
00:09:10.199 Nvme1n1 : 10.05 8965.03 70.04 0.00 0.00 14179.03 2280.11 43690.67
00:09:10.199 [2024-11-05T03:20:23.839Z] ===================================================================================================================
00:09:10.199 [2024-11-05T03:20:23.839Z] Total : 8965.03 70.04 0.00 0.00 14179.03 2280.11 43690.67
00:09:10.199 04:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2838280
00:09:10.199 04:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:09:10.199 04:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:10.199 04:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:09:10.199 04:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:09:10.199 04:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:09:10.199 04:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:09:10.199 04:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:09:10.199 04:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:09:10.199 {
00:09:10.199 "params": {
00:09:10.199 "name": "Nvme$subsystem",
00:09:10.199 "trtype": "$TEST_TRANSPORT",
00:09:10.199 "traddr": "$NVMF_FIRST_TARGET_IP",
00:09:10.199 "adrfam": "ipv4",
00:09:10.199 "trsvcid": "$NVMF_PORT",
00:09:10.199 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:09:10.199 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:09:10.199 "hdgst": ${hdgst:-false},
00:09:10.199 "ddgst": ${ddgst:-false}
00:09:10.199 },
00:09:10.199 "method": "bdev_nvme_attach_controller"
00:09:10.199 }
00:09:10.199 EOF
00:09:10.199 )")
00:09:10.199 [2024-11-05 04:20:23.639457] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.199 [2024-11-05 04:20:23.639487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.199 04:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:09:10.199 04:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:09:10.199 [2024-11-05 04:20:23.647440] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:10.199 [2024-11-05 04:20:23.647450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:10.199 04:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:09:10.199 04:20:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:09:10.199 "params": {
00:09:10.199 "name": "Nvme1",
00:09:10.199 "trtype": "tcp",
00:09:10.199 "traddr": "10.0.0.2",
00:09:10.199 "adrfam": "ipv4",
00:09:10.199 "trsvcid": "4420",
00:09:10.199 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:09:10.199 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:09:10.199 "hdgst": false,
00:09:10.199 "ddgst": false
00:09:10.199 },
00:09:10.199 "method": "bdev_nvme_attach_controller"
00:09:10.199 }'
[... error pair repeats: 2024-11-05 04:20:23.655459 through 04:20:23.687548 ...]
00:09:10.199 [2024-11-05 04:20:23.693269] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization...
00:09:10.199 [2024-11-05 04:20:23.693368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2838280 ]
00:09:10.199 [2024-11-05 04:20:23.765627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:10.200 [2024-11-05 04:20:23.801354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:10.462 Running I/O for 5 seconds...
[... interleaved with the four records above, the pair subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace repeats continuously from 2024-11-05 04:20:23.695561 through 04:20:24.033404 ...]
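The two-line error that dominates this stretch is expected rather than a failure of the run: NSID 1 is already attached to nqn.2016-06.io.spdk:cnode1, so every repeated nvmf_subsystem_add_ns for the same namespace is rejected by the target while bdevperf ($perfpid, 2838280 above) keeps the subsystem busy. The loop issuing those calls is not visible here because xtrace is switched off at zcopy.sh@41; a rough sketch of its shape (an assumption about the script's internals, not its literal code):

    # Re-add NSID 1 in a tight loop while bdevperf is alive; each call fails
    # with "Requested NSID 1 already in use" because malloc0 is still attached,
    # producing the repeated error pair seen in this log.
    while kill -0 "$perfpid" 2> /dev/null; do
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done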
[... the same error pair repeats continuously from 2024-11-05 04:20:24.033419 through 04:20:24.951648 ...]
00:09:11.509 18990.00 IOPS, 148.36 MiB/s [2024-11-05T03:20:25.149Z]
[... and again from 2024-11-05 04:20:24.960155 through 04:20:25.687015 ...]
00:09:12.294 [2024-11-05 04:20:25.696194]
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.696209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.705321] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.705337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.714358] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.714373] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.722794] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.722809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.731258] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.731272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.739883] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.739898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.748915] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.748930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.758028] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.758043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.766474] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.766489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.775042] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.775057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.784087] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.784103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.793099] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.793115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.802269] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.802284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.811198] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.811214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.820027] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.820042] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.828812] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.828832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.837883] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.837899] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.846665] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.846682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.855402] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.294 [2024-11-05 04:20:25.855417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.294 [2024-11-05 04:20:25.864265] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.295 [2024-11-05 04:20:25.864280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.295 [2024-11-05 04:20:25.873013] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.295 [2024-11-05 04:20:25.873028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.295 [2024-11-05 04:20:25.881783] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.295 [2024-11-05 04:20:25.881799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.295 [2024-11-05 04:20:25.891072] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.295 [2024-11-05 04:20:25.891087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.295 [2024-11-05 04:20:25.899637] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.295 [2024-11-05 04:20:25.899651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.295 [2024-11-05 04:20:25.908497] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.295 [2024-11-05 04:20:25.908511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.295 [2024-11-05 04:20:25.917072] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.295 [2024-11-05 04:20:25.917087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.295 [2024-11-05 04:20:25.926074] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.295 [2024-11-05 04:20:25.926089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:25.935083] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:25.935098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:25.943878] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:25.943892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:25.952651] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:25.952665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 19054.50 IOPS, 148.86 MiB/s [2024-11-05T03:20:26.197Z] [2024-11-05 04:20:25.959161] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:25.959175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:25.969383] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:25.969397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:25.978450] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:25.978465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:25.987261] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:25.987276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:25.995793] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:25.995808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:26.004301] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:26.004315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:26.012825] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:26.012841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:26.021658] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:26.021673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:26.030210] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:26.030225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:26.039307] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:26.039322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:26.047883] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:26.047898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:26.056563] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:26.056578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:26.065237] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:26.065252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:12.557 [2024-11-05 04:20:26.074209] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
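[Editor's note: the repeated pair above is the SPDK target rejecting an nvmf_subsystem_add_ns RPC because NSID 1 is already allocated in the subsystem, followed by the RPC layer reporting the failure; the test loops the call, so the pair recurs once per attempt, while the interleaved IOPS lines are the concurrent I/O workload's periodic throughput report. A minimal sketch of provoking the same error by hand, assuming a running nvmf target and SPDK's stock scripts/rpc.py; the NQN and bdev names are illustrative placeholders, not taken from this run:]

# sketch only -- assumes a running SPDK nvmf target; names below are hypothetical
./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
./scripts/rpc.py bdev_malloc_create 32 512 -b Malloc1
./scripts/rpc.py nvmf_create_transport -t tcp
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # NSID 1 now allocated
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1   # rejected: "Requested NSID 1 already in use"

[The second add is refused inside spdk_nvmf_subsystem_add_ns_ext() (subsystem.c) and surfaces through nvmf_rpc.c as "Unable to add namespace" -- exactly the pair repeated throughout this stretch of the log.]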
00:09:12.557 [2024-11-05 04:20:26.082499] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:12.557 [2024-11-05 04:20:26.082513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... error pair repeats at ~9 ms intervals from [2024-11-05 04:20:26.091590] through [2024-11-05 04:20:26.954744] (elapsed 00:09:12.557-00:09:13.344) ...]
00:09:13.344 19079.33 IOPS, 149.06 MiB/s [2024-11-05T03:20:26.984Z]
[... error pair repeats from [2024-11-05 04:20:26.962799] through [2024-11-05 04:20:27.851301] (elapsed 00:09:13.344-00:09:14.390) ...]
00:09:14.390 [2024-11-05 04:20:27.860032] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.860047]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:27.869013] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.869028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:27.877674] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.877689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:27.886166] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.886180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:27.894704] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.894718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:27.903015] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.903029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:27.911904] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.911919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:27.925065] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.925081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:27.933517] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.933532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:27.942516] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.942530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:27.951357] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.951372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 19101.00 IOPS, 149.23 MiB/s [2024-11-05T03:20:28.030Z] [2024-11-05 04:20:27.961198] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.961212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:27.970088] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.970103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:27.978892] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.978907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:27.987421] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.987436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 
04:20:27.996334] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:27.996349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:28.005000] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:28.005014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:28.013664] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:28.013679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.390 [2024-11-05 04:20:28.022440] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.390 [2024-11-05 04:20:28.022455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.031056] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.031071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.039700] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.039715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.049050] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.049065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.057714] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.057728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.065929] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.065944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.074846] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.074861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.083445] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.083460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.092697] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.092712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.101736] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.101755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.110607] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.110622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.119700] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.119715] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.128793] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.128807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.137487] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.137501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.146461] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.146476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.155529] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.155543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.164409] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.164423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.173178] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.173192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.182254] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.182268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.190189] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.190204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.199190] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.199205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.208277] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.208292] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.217430] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.217445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.225966] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.225980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.234676] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.234691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.243455] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.243470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.252376] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.252391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.261594] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.261610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.651 [2024-11-05 04:20:28.270476] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.651 [2024-11-05 04:20:28.270492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.652 [2024-11-05 04:20:28.279109] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.652 [2024-11-05 04:20:28.279123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.652 [2024-11-05 04:20:28.287679] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.652 [2024-11-05 04:20:28.287694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.296191] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.296206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.304820] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.304835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.313668] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.313682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.322855] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.322870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.331387] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.331401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.340593] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.340608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.349588] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.349603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.358146] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.358161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.366906] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.366920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.375808] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.375822] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.384656] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.384672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.393346] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.393361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.402512] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.402527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.411043] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.411058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.420324] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.420339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.428889] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.428903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.437921] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.437935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.446935] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.446949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.913 [2024-11-05 04:20:28.455928] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.913 [2024-11-05 04:20:28.455943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.914 [2024-11-05 04:20:28.464896] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.914 [2024-11-05 04:20:28.464910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.914 [2024-11-05 04:20:28.473827] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.914 [2024-11-05 04:20:28.473842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.914 [2024-11-05 04:20:28.482561] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.914 [2024-11-05 04:20:28.482579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.914 [2024-11-05 04:20:28.491737] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.914 [2024-11-05 04:20:28.491756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.914 [2024-11-05 04:20:28.500769] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.914 [2024-11-05 04:20:28.500783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.914 [2024-11-05 04:20:28.509555] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.914 [2024-11-05 04:20:28.509569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.914 [2024-11-05 04:20:28.518750] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.914 [2024-11-05 04:20:28.518765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.914 [2024-11-05 04:20:28.527411] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.914 [2024-11-05 04:20:28.527425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.914 [2024-11-05 04:20:28.536199] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.914 [2024-11-05 04:20:28.536213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:14.914 [2024-11-05 04:20:28.545267] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:14.914 [2024-11-05 04:20:28.545281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.553980] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.553994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.563069] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.563083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.571581] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.571596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.580348] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.580363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.589623] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.589638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.598089] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.598104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.606461] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.606476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.615572] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.615587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.624878] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.624893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.633656] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.633671] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.642273] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.642288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.650299] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.650318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.659132] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.659147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.668164] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.668179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.676582] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.676596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.685102] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.685117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.693994] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.694010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.702454] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.702468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.710962] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.710977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.719563] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.719578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.728467] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.728483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.737524] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.737539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.746837] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.746852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.755309] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.755324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.764252] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.764268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.773193] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.773208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.781944] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.781959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.790651] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.790665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.799532] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.799547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.176 [2024-11-05 04:20:28.808668] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.176 [2024-11-05 04:20:28.808682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.817303] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.817323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.826495] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.826511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.835313] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.835328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.844146] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.844162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.853424] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.853439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.862354] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.862369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.871468] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.871483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.880584] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.880599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.889798] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.889813] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.898266] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.898281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.907029] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.907045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.916067] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.916082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.924775] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.924790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.933915] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.933930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.942642] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.942657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.951531] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.951546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.960440] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.960456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 19120.80 IOPS, 149.38 MiB/s [2024-11-05T03:20:29.078Z] [2024-11-05 04:20:28.966641] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.966655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 00:09:15.438 Latency(us) 00:09:15.438 [2024-11-05T03:20:29.078Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:15.438 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:15.438 Nvme1n1 : 5.01 19124.09 149.41 0.00 0.00 6687.09 2498.56 18568.53 00:09:15.438 [2024-11-05T03:20:29.078Z] =================================================================================================================== 00:09:15.438 [2024-11-05T03:20:29.078Z] Total : 19124.09 149.41 0.00 0.00 6687.09 2498.56 18568.53 00:09:15.438 [2024-11-05 04:20:28.974657] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.974669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.982676] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.982688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.990700] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 
04:20:28.990712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:28.998719] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:28.998732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.438 [2024-11-05 04:20:29.006738] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.438 [2024-11-05 04:20:29.006752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.439 [2024-11-05 04:20:29.014762] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.439 [2024-11-05 04:20:29.014772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.439 [2024-11-05 04:20:29.022792] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.439 [2024-11-05 04:20:29.022802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.439 [2024-11-05 04:20:29.030799] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.439 [2024-11-05 04:20:29.030807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.439 [2024-11-05 04:20:29.038818] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.439 [2024-11-05 04:20:29.038826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.439 [2024-11-05 04:20:29.046837] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.439 [2024-11-05 04:20:29.046845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.439 [2024-11-05 04:20:29.054859] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.439 [2024-11-05 04:20:29.054868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.439 [2024-11-05 04:20:29.062881] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.439 [2024-11-05 04:20:29.062890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.439 [2024-11-05 04:20:29.070900] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.439 [2024-11-05 04:20:29.070908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.699 [2024-11-05 04:20:29.078921] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:15.699 [2024-11-05 04:20:29.078929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2838280) - No such process 00:09:15.699 04:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2838280 00:09:15.699 04:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:15.699 04:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.699 04:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.699 04:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.699 04:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # 
rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:15.699 04:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.699 04:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.699 delay0 00:09:15.699 04:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.699 04:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:15.699 04:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.699 04:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:15.699 04:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.699 04:20:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:15.699 [2024-11-05 04:20:29.217139] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:23.920 [2024-11-05 04:20:36.167127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8c6d0 is same with the state(6) to be set 00:09:23.920 [2024-11-05 04:20:36.167166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8c6d0 is same with the state(6) to be set 00:09:23.920 [2024-11-05 04:20:36.167172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8c6d0 is same with the state(6) to be set 00:09:23.920 Initializing NVMe Controllers 00:09:23.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:23.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:23.920 Initialization complete. Launching workers. 
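The trace above tears namespace 1 out of cnode1, rebuilds it on a delay bdev layered over malloc0, and then fires abortable I/O at it; the counters just below summarize that run. To reproduce the step outside the Jenkins harness, a minimal sketch run from the SPDK repo root, with scripts/rpc.py standing in for the harness's rpc_cmd wrapper (all arguments taken verbatim from the trace):

    # Swap namespace 1 onto a delay bdev; -r/-t/-w/-n set the delay bdev's
    # average/p99 read and write latencies (microseconds in SPDK's delay bdev).
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # Five seconds of abortable 50/50 random read/write against the slow namespace.
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

With roughly one-second latencies injected on every I/O, most of the 64 queued commands are still pending when the aborts arrive, which is the condition the test is exercising.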
00:09:23.920 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 6879
00:09:23.920 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7162, failed to submit 37
00:09:23.920 success 6979, unsuccessful 183, failed 0
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:23.920 rmmod nvme_tcp
00:09:23.920 rmmod nvme_fabrics
00:09:23.920 rmmod nvme_keyring
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2836126 ']'
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2836126
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 2836126 ']'
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 2836126
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2836126
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2836126'
00:09:23.920 killing process with pid 2836126
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 2836126
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 2836126
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:23.920 04:20:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:24.865 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:24.865
00:09:24.865 real 0m33.888s
00:09:24.865 user 0m45.801s
00:09:24.865 sys 0m11.052s
00:09:24.865 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:24.865 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:24.865 ************************************
00:09:24.865 END TEST nvmf_zcopy
00:09:24.865 ************************************
00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:25.127 ************************************
00:09:25.127 START TEST nvmf_nmic
00:09:25.127 ************************************
00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:09:25.127 * Looking for test storage...
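The real/user/sys block and the starred banners above are produced by the harness's run_test wrapper, which prints a START banner, times the test script, and closes with an END banner; user time (0m45.801s) exceeding wall time (0m33.888s) shows the zcopy run kept more than one core busy. A rough sketch of the wrapper's shape, an approximation for illustration rather than the verbatim autotest_common.sh function:

    run_test() {
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"    # emits the real/user/sys lines seen in this log
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }

    # Invoked above as:
    run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp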
00:09:25.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:25.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.127 --rc genhtml_branch_coverage=1 00:09:25.127 --rc genhtml_function_coverage=1 00:09:25.127 --rc genhtml_legend=1 00:09:25.127 --rc geninfo_all_blocks=1 00:09:25.127 --rc geninfo_unexecuted_blocks=1 00:09:25.127 00:09:25.127 ' 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:25.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.127 --rc genhtml_branch_coverage=1 00:09:25.127 --rc genhtml_function_coverage=1 00:09:25.127 --rc genhtml_legend=1 00:09:25.127 --rc geninfo_all_blocks=1 00:09:25.127 --rc geninfo_unexecuted_blocks=1 00:09:25.127 00:09:25.127 ' 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:25.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.127 --rc genhtml_branch_coverage=1 00:09:25.127 --rc genhtml_function_coverage=1 00:09:25.127 --rc genhtml_legend=1 00:09:25.127 --rc geninfo_all_blocks=1 00:09:25.127 --rc geninfo_unexecuted_blocks=1 00:09:25.127 00:09:25.127 ' 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:25.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.127 --rc genhtml_branch_coverage=1 00:09:25.127 --rc genhtml_function_coverage=1 00:09:25.127 --rc genhtml_legend=1 00:09:25.127 --rc geninfo_all_blocks=1 00:09:25.127 --rc geninfo_unexecuted_blocks=1 00:09:25.127 00:09:25.127 ' 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.127 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
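The nmic.sh preamble above walks scripts/common.sh's version comparison ('lt 1.15 2' expanding into cmp_versions 1.15 '<' 2) to decide which lcov flags the installed tool supports. The idiom splits both version strings on '.', '-' and ':' and compares the fields numerically, left to right, padding missing fields with 0. A condensed, self-contained sketch of that logic, an approximation of the traced scripts/common.sh rather than a verbatim copy:

    # lt A B: succeed (return 0) when version A sorts strictly before B.
    lt() {
        local -a ver1 ver2
        local v len
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # smaller field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1  # larger field decides
        done
        return 1  # equal versions are not strictly less-than
    }

    lt 1.15 2 && echo 'lcov 1.15 predates 2.x'  # first fields compare 1 < 2, so this prints

For 1.15 against 2 the comparison is settled by the first field (1 < 2), which is why the trace above returns right after one decimal check of each side.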
00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.389 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:25.389 
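The '[: : integer expression expected' diagnostic logged above comes from nvmf/common.sh line 33, where an empty variable expansion is handed to the test builtin's -eq operator: '[' '' -eq 1 ']' is not a valid integer comparison, so test prints the complaint and returns non-zero, which build_nvmf_app_args simply treats as false. A two-line reproduction and the usual guard, assuming nothing beyond bash's builtin test:

    flag=''                              # empty expansion, as in the trace
    [ "$flag" -eq 1 ] && echo on        # -> '[: : integer expression expected'
    [ "${flag:-0}" -eq 1 ] && echo on   # defaulting to 0 keeps the comparison valid
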
04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.389 04:20:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.533 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:33.533 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:33.533 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:33.533 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:33.533 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:33.533 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:33.533 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:33.533 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:33.533 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:33.533 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:33.533 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:33.533 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:33.534 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:33.534 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.534 04:20:45 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:33.534 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:33.534 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:33.534 04:20:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:33.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:33.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.676 ms 00:09:33.534 00:09:33.534 --- 10.0.0.2 ping statistics --- 00:09:33.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.534 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:33.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:33.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:09:33.534 00:09:33.534 --- 10.0.0.1 ping statistics --- 00:09:33.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:33.534 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2845095 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2845095 00:09:33.534 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:33.535 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 2845095 ']' 00:09:33.535 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.535 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:33.535 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.535 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:33.535 04:20:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.535 [2024-11-05 04:20:46.190917] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
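With both pings answered, the test bed is in place: nvmf_tcp_init moved the first E810 port (cvl_0_0) into a private network namespace for the target, left the second port (cvl_0_1) in the root namespace as the initiator, opened TCP/4420 in iptables, and verified connectivity in both directions before launching nvmf_tgt inside the namespace. Condensed from the trace above (interface names and addresses are the ones this run used; the iptables comment tag is omitted for brevity):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target side, isolated
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Running the target under 'ip netns exec cvl_0_0_ns_spdk' is what makes 10.0.0.2 a genuinely remote endpoint from the initiator's point of view, even though both ports sit in the same chassis.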
00:09:33.535 [2024-11-05 04:20:46.190984] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:33.535 [2024-11-05 04:20:46.276084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:33.535 [2024-11-05 04:20:46.319299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:33.535 [2024-11-05 04:20:46.319341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:33.535 [2024-11-05 04:20:46.319350] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:33.535 [2024-11-05 04:20:46.319357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:33.535 [2024-11-05 04:20:46.319362] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:33.535 [2024-11-05 04:20:46.320969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.535 [2024-11-05 04:20:46.321086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.535 [2024-11-05 04:20:46.321243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.535 [2024-11-05 04:20:46.321244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.535 [2024-11-05 04:20:47.052149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.535 Malloc0 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.535 [2024-11-05 04:20:47.131195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:33.535 test case1: single bdev can't be used in multiple subsystems 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.535 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.535 [2024-11-05 04:20:47.167097] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:33.535 [2024-11-05 04:20:47.167118] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:33.535 [2024-11-05 04:20:47.167126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:33.535 request: 00:09:33.535 { 00:09:33.796 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:33.796 "namespace": { 00:09:33.796 "bdev_name": "Malloc0", 00:09:33.796 "no_auto_visible": false 
00:09:33.796 }, 00:09:33.796 "method": "nvmf_subsystem_add_ns", 00:09:33.796 "req_id": 1 00:09:33.796 } 00:09:33.796 Got JSON-RPC error response 00:09:33.796 response: 00:09:33.796 { 00:09:33.796 "code": -32602, 00:09:33.796 "message": "Invalid parameters" 00:09:33.796 } 00:09:33.796 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:33.796 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:33.796 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:33.796 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:33.796 Adding namespace failed - expected result. 00:09:33.796 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:33.796 test case2: host connect to nvmf target in multiple paths 00:09:33.796 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:33.796 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:33.796 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:33.796 [2024-11-05 04:20:47.179242] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:33.796 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:33.796 04:20:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:35.182 04:20:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:37.096 04:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:37.096 04:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:09:37.096 04:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:37.096 04:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:09:37.096 04:20:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:09:39.037 04:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:39.037 04:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:39.037 04:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.037 04:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:09:39.037 04:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.037 04:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:09:39.037 04:20:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:39.037 [global] 00:09:39.037 thread=1 00:09:39.037 invalidate=1 00:09:39.037 rw=write 00:09:39.037 time_based=1 00:09:39.037 runtime=1 00:09:39.037 ioengine=libaio 00:09:39.037 direct=1 00:09:39.037 bs=4096 00:09:39.037 iodepth=1 00:09:39.037 norandommap=0 00:09:39.037 numjobs=1 00:09:39.037 00:09:39.037 verify_dump=1 00:09:39.037 verify_backlog=512 00:09:39.037 verify_state_save=0 00:09:39.037 do_verify=1 00:09:39.037 verify=crc32c-intel 00:09:39.037 [job0] 00:09:39.037 filename=/dev/nvme0n1 00:09:39.037 Could not set queue depth (nvme0n1) 00:09:39.303 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:39.303 fio-3.35 00:09:39.303 Starting 1 thread 00:09:40.688 00:09:40.688 job0: (groupid=0, jobs=1): err= 0: pid=2846512: Tue Nov 5 04:20:53 2024 00:09:40.688 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:40.688 slat (nsec): min=25543, max=61650, avg=26365.47, stdev=2114.77 00:09:40.688 clat (usec): min=548, max=1171, avg=974.08, stdev=65.30 00:09:40.688 lat (usec): min=574, max=1197, avg=1000.45, stdev=65.30 00:09:40.688 clat percentiles (usec): 00:09:40.688 | 1.00th=[ 783], 5.00th=[ 848], 10.00th=[ 889], 20.00th=[ 938], 00:09:40.688 | 30.00th=[ 963], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 996], 00:09:40.688 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1045], 95.00th=[ 1057], 00:09:40.688 | 99.00th=[ 1106], 99.50th=[ 1106], 99.90th=[ 1172], 99.95th=[ 1172], 00:09:40.688 | 99.99th=[ 1172] 00:09:40.688 write: IOPS=750, BW=3001KiB/s (3073kB/s)(3004KiB/1001msec); 0 zone resets 00:09:40.688 slat (usec): min=9, max=25047, avg=61.94, stdev=913.00 00:09:40.688 clat (usec): min=211, max=770, avg=575.11, stdev=103.12 00:09:40.688 lat (usec): min=221, max=25757, avg=637.05, stdev=924.13 00:09:40.688 clat percentiles (usec): 00:09:40.688 | 1.00th=[ 330], 5.00th=[ 379], 10.00th=[ 433], 20.00th=[ 482], 00:09:40.688 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 611], 00:09:40.688 | 70.00th=[ 644], 80.00th=[ 676], 90.00th=[ 701], 95.00th=[ 717], 00:09:40.688 | 99.00th=[ 742], 99.50th=[ 758], 99.90th=[ 775], 99.95th=[ 775], 00:09:40.688 | 99.99th=[ 775] 00:09:40.688 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:40.688 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:40.688 lat (usec) : 250=0.24%, 500=13.70%, 750=45.29%, 1000=27.47% 00:09:40.688 lat (msec) : 2=13.30% 00:09:40.688 cpu : usr=1.80%, sys=3.70%, ctx=1267, majf=0, minf=1 00:09:40.688 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:40.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:40.688 issued rwts: total=512,751,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:40.688 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:40.688 00:09:40.688 Run status group 0 (all jobs): 00:09:40.688 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:09:40.688 WRITE: bw=3001KiB/s (3073kB/s), 3001KiB/s-3001KiB/s (3073kB/s-3073kB/s), io=3004KiB (3076kB), run=1001-1001msec 00:09:40.688 00:09:40.688 Disk stats (read/write): 00:09:40.688 nvme0n1: ios=565/586, merge=0/0, ticks=1502/324, in_queue=1826, util=98.60% 00:09:40.688 04:20:53 
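To recap the nmic flow that produced the fio run above: the target is configured purely over JSON-RPC, and test case1's failure is the expected outcome, since Malloc0 is already claimed exclusive_write by cnode1 when cnode2 tries to add it, so nvmf_subsystem_add_ns returns -32602. The rpc_cmd calls reduce to the following invocations (rpc.py here stands for the full spdk/scripts/rpc.py path seen in the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # must fail: bdev already claimed

Test case2 then adds a second listener on 4421 to cnode1 and connects the host over both ports, which is why the later disconnect reports two controllers while lsblk sees a single nvme0n1 behind one serial.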
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:40.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:40.688 rmmod nvme_tcp 00:09:40.688 rmmod nvme_fabrics 00:09:40.688 rmmod nvme_keyring 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:40.688 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:40.689 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2845095 ']' 00:09:40.689 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2845095 00:09:40.689 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 2845095 ']' 00:09:40.689 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 2845095 00:09:40.689 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:09:40.689 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:40.689 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2845095 00:09:40.689 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:40.689 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:40.689 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2845095' 00:09:40.689 killing process with pid 2845095 00:09:40.689 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 2845095 00:09:40.689 04:20:54 
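Teardown runs in the reverse order of setup: disconnect the initiator from cnode1 (dropping both the 4420 and the 4421 controller), sync, unload the host-side modules (the rmmod lines above), then kill nvmf_tgt by the pid recorded at startup. In sketch form, with 2845095 being this run's pid:

    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # reports '2 controller(s)' here
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill 2845095 && wait 2845095 2>/dev/null        # killprocess: signal, then reap
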
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 2845095 00:09:40.950 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:40.950 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:40.950 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:40.950 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:40.950 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:40.950 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:40.950 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:40.950 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:40.950 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:40.950 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.950 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:40.950 04:20:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.863 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:42.863 00:09:42.863 real 0m17.883s 00:09:42.863 user 0m50.200s 00:09:42.863 sys 0m6.409s 00:09:42.863 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:42.863 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:42.863 ************************************ 00:09:42.863 END TEST nvmf_nmic 00:09:42.863 ************************************ 00:09:42.863 04:20:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:42.863 04:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:42.863 04:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:42.863 04:20:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:43.126 ************************************ 00:09:43.126 START TEST nvmf_fio_target 00:09:43.126 ************************************ 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:43.126 * Looking for test storage... 
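The real/user/sys line and the banners around it come from the run_test wrapper in autotest_common.sh, which times each test function and frames it with START/END markers so the log can be split per test; the same wrapper immediately starts nvmf_fio_target, which is why the version-compare and device-discovery traces repeat below. A rough sketch of the observable behavior, not SPDK's exact implementation:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"                 # emits the real/user/sys summary seen above
        echo "END TEST $name"
    }
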
00:09:43.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:43.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.126 --rc genhtml_branch_coverage=1 00:09:43.126 --rc genhtml_function_coverage=1 00:09:43.126 --rc genhtml_legend=1 00:09:43.126 --rc geninfo_all_blocks=1 00:09:43.126 --rc geninfo_unexecuted_blocks=1 00:09:43.126 00:09:43.126 ' 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:43.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.126 --rc genhtml_branch_coverage=1 00:09:43.126 --rc genhtml_function_coverage=1 00:09:43.126 --rc genhtml_legend=1 00:09:43.126 --rc geninfo_all_blocks=1 00:09:43.126 --rc geninfo_unexecuted_blocks=1 00:09:43.126 00:09:43.126 ' 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:43.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.126 --rc genhtml_branch_coverage=1 00:09:43.126 --rc genhtml_function_coverage=1 00:09:43.126 --rc genhtml_legend=1 00:09:43.126 --rc geninfo_all_blocks=1 00:09:43.126 --rc geninfo_unexecuted_blocks=1 00:09:43.126 00:09:43.126 ' 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:43.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:43.126 --rc genhtml_branch_coverage=1 00:09:43.126 --rc genhtml_function_coverage=1 00:09:43.126 --rc genhtml_legend=1 00:09:43.126 --rc geninfo_all_blocks=1 00:09:43.126 --rc geninfo_unexecuted_blocks=1 00:09:43.126 00:09:43.126 ' 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.126 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:43.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:43.127 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:43.388 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:43.388 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:43.388 04:20:56 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:43.388 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:43.388 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:43.389 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.389 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:43.389 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:43.389 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:43.389 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:43.389 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:43.389 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.389 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:43.389 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:43.389 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:43.389 04:20:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.980 04:21:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:49.980 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:49.980 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.980 04:21:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:49.980 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:49.980 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:49.980 04:21:03 
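Annotation: the device discovery traced above reduces to globbing the PCI-to-netdev mapping out of sysfs — nvmf/common.sh matches each supported PCI ID, then lists the kernel net devices bound under that function. A minimal standalone sketch of that step (same /sys/bus/pci layout as above; the pci variable and loop are illustrative, not code from the harness):

    # For a given PCI function, list the kernel net devices backing it,
    # mirroring the pci_net_devs glob and ##*/ strip seen in the trace.
    pci=0000:4b:00.0
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue            # no netdev bound (driver not loaded)
        echo "Found net devices under $pci: ${dev##*/}"
    done

On this machine the glob resolves to cvl_0_0 and cvl_0_1, the two E810 ports (ice driver) that the rest of the run uses.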
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.980 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:50.241 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:50.241 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:50.241 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:50.241 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:50.241 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:50.241 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:50.241 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:50.241 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.662 ms 00:09:50.241 00:09:50.241 --- 10.0.0.2 ping statistics --- 00:09:50.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.241 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:09:50.241 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:50.241 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:50.241 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:09:50.241 00:09:50.241 --- 10.0.0.1 ping statistics --- 00:09:50.241 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:50.241 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:09:50.241 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:50.241 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:09:50.241 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2851202 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2851202 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 2851202 ']' 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:50.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:50.242 04:21:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.242 [2024-11-05 04:21:03.869519] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
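Annotation: the nvmf_tcp_init sequence just traced splits the two E810 ports across network namespaces so target and initiator traffic crosses real hardware, then launches nvmf_tgt inside the target namespace. Condensed from the commands above (interface names and addresses exactly as logged; the nvmf_tgt path is abbreviated here):

    # Move one port into a dedicated namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Open the NVMe/TCP port toward the initiator side.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Launch the target inside the namespace (path shortened from the log).
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The two pings (10.0.0.2 from the host, 10.0.0.1 from inside the namespace) verify the topology before the target starts; the SPDK/DPDK startup records follow.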
00:09:50.242 [2024-11-05 04:21:03.869588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.503 [2024-11-05 04:21:03.951502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:50.503 [2024-11-05 04:21:03.993169] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.503 [2024-11-05 04:21:03.993206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.503 [2024-11-05 04:21:03.993214] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.503 [2024-11-05 04:21:03.993226] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.503 [2024-11-05 04:21:03.993232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.503 [2024-11-05 04:21:03.994788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.503 [2024-11-05 04:21:03.995008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.503 [2024-11-05 04:21:03.995009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.503 [2024-11-05 04:21:03.994867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:51.074 04:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:51.074 04:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:09:51.074 04:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:51.074 04:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:51.074 04:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.334 04:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.335 04:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:51.335 [2024-11-05 04:21:04.870143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:51.335 04:21:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.617 04:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:51.617 04:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.877 04:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:51.877 04:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:51.877 04:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:51.877 04:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.137 04:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:52.138 04:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:52.399 04:21:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.659 04:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:52.659 04:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.659 04:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:52.660 04:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:52.920 04:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:52.920 04:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:53.180 04:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:53.180 04:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:53.180 04:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:53.441 04:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:53.441 04:21:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:53.702 04:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.702 [2024-11-05 04:21:07.326099] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.962 04:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:53.962 04:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:54.223 04:21:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:55.607 04:21:09 
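Annotation: collapsing the rpc.py calls traced above, fio.sh assembles the target roughly as follows before the initiator connects — a TCP transport, seven malloc bdevs feeding one plain pair plus a RAID-0 and a concat volume, and a single subsystem exposing four namespaces. Paths are shortened to rpc.py, and the initiator's --hostnqn/--hostid flags are omitted; everything else is as logged:

    # Target side: transport, bdevs, subsystem, listener.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512      # run once per bdev -> Malloc0..Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    # Initiator side: connect, then wait for all four namespaces to enumerate.
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

waitforserial below polls lsblk for the SPDKISFASTANDAWESOME serial until four block devices (nvme0n1..nvme0n4) appear, which the fio-wrapper runs then use as job targets.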
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:55.607 04:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:55.607 04:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:09:55.607 04:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:09:55.607 04:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:09:55.607 04:21:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:09:58.158 04:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:09:58.158 04:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:09:58.158 04:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:09:58.159 04:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:09:58.159 04:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:09:58.159 04:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:09:58.159 04:21:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:58.159 [global] 00:09:58.159 thread=1 00:09:58.159 invalidate=1 00:09:58.159 rw=write 00:09:58.159 time_based=1 00:09:58.159 runtime=1 00:09:58.159 ioengine=libaio 00:09:58.159 direct=1 00:09:58.159 bs=4096 00:09:58.159 iodepth=1 00:09:58.159 norandommap=0 00:09:58.159 numjobs=1 00:09:58.159 00:09:58.159 verify_dump=1 00:09:58.159 verify_backlog=512 00:09:58.159 verify_state_save=0 00:09:58.159 do_verify=1 00:09:58.159 verify=crc32c-intel 00:09:58.159 [job0] 00:09:58.159 filename=/dev/nvme0n1 00:09:58.159 [job1] 00:09:58.159 filename=/dev/nvme0n2 00:09:58.159 [job2] 00:09:58.159 filename=/dev/nvme0n3 00:09:58.159 [job3] 00:09:58.159 filename=/dev/nvme0n4 00:09:58.159 Could not set queue depth (nvme0n1) 00:09:58.159 Could not set queue depth (nvme0n2) 00:09:58.159 Could not set queue depth (nvme0n3) 00:09:58.159 Could not set queue depth (nvme0n4) 00:09:58.159 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.159 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.159 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.159 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:58.159 fio-3.35 00:09:58.159 Starting 4 threads 00:09:59.544 00:09:59.544 job0: (groupid=0, jobs=1): err= 0: pid=2853326: Tue Nov 5 04:21:12 2024 00:09:59.544 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:59.544 slat (nsec): min=6702, max=60198, avg=27274.39, stdev=3825.95 00:09:59.544 clat (usec): min=610, max=1254, avg=993.91, stdev=87.18 00:09:59.544 lat (usec): min=619, max=1281, avg=1021.19, stdev=88.01 00:09:59.544 clat percentiles (usec): 00:09:59.544 | 1.00th=[ 758], 5.00th=[ 857], 10.00th=[ 898], 20.00th=[ 930], 
00:09:59.544 | 30.00th=[ 955], 40.00th=[ 971], 50.00th=[ 996], 60.00th=[ 1012], 00:09:59.544 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1106], 95.00th=[ 1123], 00:09:59.544 | 99.00th=[ 1221], 99.50th=[ 1254], 99.90th=[ 1254], 99.95th=[ 1254], 00:09:59.544 | 99.99th=[ 1254] 00:09:59.544 write: IOPS=709, BW=2837KiB/s (2905kB/s)(2840KiB/1001msec); 0 zone resets 00:09:59.544 slat (nsec): min=9207, max=72239, avg=31791.94, stdev=9310.88 00:09:59.544 clat (usec): min=250, max=997, avg=626.54, stdev=118.23 00:09:59.544 lat (usec): min=283, max=1046, avg=658.33, stdev=121.82 00:09:59.544 clat percentiles (usec): 00:09:59.544 | 1.00th=[ 343], 5.00th=[ 416], 10.00th=[ 469], 20.00th=[ 529], 00:09:59.544 | 30.00th=[ 570], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 668], 00:09:59.544 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 766], 95.00th=[ 807], 00:09:59.544 | 99.00th=[ 865], 99.50th=[ 914], 99.90th=[ 996], 99.95th=[ 996], 00:09:59.544 | 99.99th=[ 996] 00:09:59.544 bw ( KiB/s): min= 4096, max= 4096, per=36.15%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.544 lat (usec) : 500=8.51%, 750=41.49%, 1000=30.85% 00:09:59.544 lat (msec) : 2=19.15% 00:09:59.544 cpu : usr=3.50%, sys=4.00%, ctx=1223, majf=0, minf=1 00:09:59.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.544 issued rwts: total=512,710,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.544 job1: (groupid=0, jobs=1): err= 0: pid=2853334: Tue Nov 5 04:21:12 2024 00:09:59.544 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:59.544 slat (nsec): min=6600, max=60657, avg=25303.35, stdev=5692.03 00:09:59.544 clat (usec): min=409, max=1816, avg=919.15, stdev=150.65 00:09:59.544 lat (usec): min=436, max=1842, avg=944.45, stdev=152.18 00:09:59.544 clat percentiles (usec): 00:09:59.544 | 1.00th=[ 537], 5.00th=[ 676], 10.00th=[ 742], 20.00th=[ 799], 00:09:59.544 | 30.00th=[ 840], 40.00th=[ 881], 50.00th=[ 930], 60.00th=[ 979], 00:09:59.544 | 70.00th=[ 1004], 80.00th=[ 1037], 90.00th=[ 1074], 95.00th=[ 1123], 00:09:59.544 | 99.00th=[ 1205], 99.50th=[ 1500], 99.90th=[ 1811], 99.95th=[ 1811], 00:09:59.544 | 99.99th=[ 1811] 00:09:59.544 write: IOPS=692, BW=2769KiB/s (2836kB/s)(2772KiB/1001msec); 0 zone resets 00:09:59.544 slat (nsec): min=9029, max=57051, avg=30212.46, stdev=9451.05 00:09:59.544 clat (usec): min=222, max=986, avg=702.58, stdev=114.56 00:09:59.544 lat (usec): min=232, max=1019, avg=732.79, stdev=118.88 00:09:59.544 clat percentiles (usec): 00:09:59.544 | 1.00th=[ 375], 5.00th=[ 498], 10.00th=[ 553], 20.00th=[ 619], 00:09:59.544 | 30.00th=[ 652], 40.00th=[ 676], 50.00th=[ 709], 60.00th=[ 734], 00:09:59.544 | 70.00th=[ 775], 80.00th=[ 799], 90.00th=[ 848], 95.00th=[ 865], 00:09:59.544 | 99.00th=[ 914], 99.50th=[ 930], 99.90th=[ 988], 99.95th=[ 988], 00:09:59.544 | 99.99th=[ 988] 00:09:59.544 bw ( KiB/s): min= 4096, max= 4096, per=36.15%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.544 lat (usec) : 250=0.08%, 500=3.07%, 750=38.42%, 1000=44.56% 00:09:59.544 lat (msec) : 2=13.86% 00:09:59.544 cpu : usr=1.90%, sys=5.10%, ctx=1205, majf=0, minf=1 00:09:59.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.544 issued rwts: total=512,693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.544 job2: (groupid=0, jobs=1): err= 0: pid=2853335: Tue Nov 5 04:21:12 2024 00:09:59.544 read: IOPS=16, BW=66.5KiB/s (68.1kB/s)(68.0KiB/1023msec) 00:09:59.544 slat (nsec): min=27070, max=28318, avg=27575.18, stdev=330.87 00:09:59.544 clat (usec): min=41885, max=42245, avg=41978.63, stdev=90.18 00:09:59.544 lat (usec): min=41913, max=42272, avg=42006.21, stdev=90.15 00:09:59.544 clat percentiles (usec): 00:09:59.544 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:09:59.544 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:09:59.544 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:59.544 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:59.544 | 99.99th=[42206] 00:09:59.544 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:09:59.544 slat (nsec): min=9778, max=55727, avg=32362.15, stdev=9553.53 00:09:59.544 clat (usec): min=209, max=935, avg=564.19, stdev=122.15 00:09:59.544 lat (usec): min=220, max=971, avg=596.55, stdev=126.07 00:09:59.544 clat percentiles (usec): 00:09:59.544 | 1.00th=[ 289], 5.00th=[ 343], 10.00th=[ 408], 20.00th=[ 449], 00:09:59.544 | 30.00th=[ 510], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 603], 00:09:59.544 | 70.00th=[ 635], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 742], 00:09:59.544 | 99.00th=[ 816], 99.50th=[ 857], 99.90th=[ 938], 99.95th=[ 938], 00:09:59.544 | 99.99th=[ 938] 00:09:59.544 bw ( KiB/s): min= 4096, max= 4096, per=36.15%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.544 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.544 lat (usec) : 250=0.76%, 500=26.47%, 750=65.03%, 1000=4.54% 00:09:59.544 lat (msec) : 50=3.21% 00:09:59.544 cpu : usr=0.88%, sys=1.47%, ctx=531, majf=0, minf=1 00:09:59.544 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.544 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.544 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.544 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.544 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.544 job3: (groupid=0, jobs=1): err= 0: pid=2853336: Tue Nov 5 04:21:12 2024 00:09:59.544 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:09:59.544 slat (nsec): min=6963, max=46308, avg=25325.64, stdev=5876.13 00:09:59.544 clat (usec): min=407, max=1753, avg=856.36, stdev=132.82 00:09:59.544 lat (usec): min=433, max=1780, avg=881.68, stdev=133.79 00:09:59.544 clat percentiles (usec): 00:09:59.544 | 1.00th=[ 515], 5.00th=[ 627], 10.00th=[ 685], 20.00th=[ 758], 00:09:59.544 | 30.00th=[ 799], 40.00th=[ 840], 50.00th=[ 873], 60.00th=[ 898], 00:09:59.544 | 70.00th=[ 930], 80.00th=[ 955], 90.00th=[ 1004], 95.00th=[ 1057], 00:09:59.544 | 99.00th=[ 1106], 99.50th=[ 1139], 99.90th=[ 1762], 99.95th=[ 1762], 00:09:59.544 | 99.99th=[ 1762] 00:09:59.544 write: IOPS=982, BW=3928KiB/s (4022kB/s)(3932KiB/1001msec); 0 zone resets 00:09:59.544 slat (nsec): min=9462, max=53749, avg=30508.31, stdev=9255.36 00:09:59.544 clat (usec): min=120, max=964, avg=516.41, stdev=132.67 00:09:59.544 
lat (usec): min=129, max=997, avg=546.92, stdev=135.97 00:09:59.544 clat percentiles (usec): 00:09:59.544 | 1.00th=[ 237], 5.00th=[ 302], 10.00th=[ 355], 20.00th=[ 404], 00:09:59.544 | 30.00th=[ 441], 40.00th=[ 478], 50.00th=[ 515], 60.00th=[ 545], 00:09:59.544 | 70.00th=[ 586], 80.00th=[ 635], 90.00th=[ 685], 95.00th=[ 734], 00:09:59.544 | 99.00th=[ 840], 99.50th=[ 898], 99.90th=[ 963], 99.95th=[ 963], 00:09:59.544 | 99.99th=[ 963] 00:09:59.545 bw ( KiB/s): min= 4096, max= 4096, per=36.15%, avg=4096.00, stdev= 0.00, samples=1 00:09:59.545 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:59.545 lat (usec) : 250=1.00%, 500=29.90%, 750=38.93%, 1000=26.56% 00:09:59.545 lat (msec) : 2=3.61% 00:09:59.545 cpu : usr=2.40%, sys=4.40%, ctx=1495, majf=0, minf=2 00:09:59.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:59.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:59.545 issued rwts: total=512,983,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:59.545 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:59.545 00:09:59.545 Run status group 0 (all jobs): 00:09:59.545 READ: bw=6072KiB/s (6218kB/s), 66.5KiB/s-2046KiB/s (68.1kB/s-2095kB/s), io=6212KiB (6361kB), run=1001-1023msec 00:09:59.545 WRITE: bw=11.1MiB/s (11.6MB/s), 2002KiB/s-3928KiB/s (2050kB/s-4022kB/s), io=11.3MiB (11.9MB), run=1001-1023msec 00:09:59.545 00:09:59.545 Disk stats (read/write): 00:09:59.545 nvme0n1: ios=525/512, merge=0/0, ticks=508/252, in_queue=760, util=87.17% 00:09:59.545 nvme0n2: ios=510/512, merge=0/0, ticks=457/304, in_queue=761, util=87.44% 00:09:59.545 nvme0n3: ios=69/512, merge=0/0, ticks=1245/270, in_queue=1515, util=96.72% 00:09:59.545 nvme0n4: ios=512/676, merge=0/0, ticks=426/332, in_queue=758, util=89.50% 00:09:59.545 04:21:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:59.545 [global] 00:09:59.545 thread=1 00:09:59.545 invalidate=1 00:09:59.545 rw=randwrite 00:09:59.545 time_based=1 00:09:59.545 runtime=1 00:09:59.545 ioengine=libaio 00:09:59.545 direct=1 00:09:59.545 bs=4096 00:09:59.545 iodepth=1 00:09:59.545 norandommap=0 00:09:59.545 numjobs=1 00:09:59.545 00:09:59.545 verify_dump=1 00:09:59.545 verify_backlog=512 00:09:59.545 verify_state_save=0 00:09:59.545 do_verify=1 00:09:59.545 verify=crc32c-intel 00:09:59.545 [job0] 00:09:59.545 filename=/dev/nvme0n1 00:09:59.545 [job1] 00:09:59.545 filename=/dev/nvme0n2 00:09:59.545 [job2] 00:09:59.545 filename=/dev/nvme0n3 00:09:59.545 [job3] 00:09:59.545 filename=/dev/nvme0n4 00:09:59.545 Could not set queue depth (nvme0n1) 00:09:59.545 Could not set queue depth (nvme0n2) 00:09:59.545 Could not set queue depth (nvme0n3) 00:09:59.545 Could not set queue depth (nvme0n4) 00:09:59.806 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.806 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.806 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.806 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:59.806 fio-3.35 00:09:59.806 Starting 4 threads 00:10:01.190 00:10:01.190 job0: (groupid=0, jobs=1): 
err= 0: pid=2853854: Tue Nov 5 04:21:14 2024 00:10:01.190 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:01.190 slat (nsec): min=6582, max=50056, avg=26795.49, stdev=5328.76 00:10:01.190 clat (usec): min=373, max=1651, avg=1011.18, stdev=228.20 00:10:01.190 lat (usec): min=402, max=1678, avg=1037.98, stdev=228.53 00:10:01.190 clat percentiles (usec): 00:10:01.190 | 1.00th=[ 441], 5.00th=[ 611], 10.00th=[ 660], 20.00th=[ 791], 00:10:01.190 | 30.00th=[ 873], 40.00th=[ 963], 50.00th=[ 1090], 60.00th=[ 1156], 00:10:01.190 | 70.00th=[ 1188], 80.00th=[ 1221], 90.00th=[ 1254], 95.00th=[ 1287], 00:10:01.190 | 99.00th=[ 1336], 99.50th=[ 1369], 99.90th=[ 1647], 99.95th=[ 1647], 00:10:01.190 | 99.99th=[ 1647] 00:10:01.190 write: IOPS=850, BW=3401KiB/s (3482kB/s)(3404KiB/1001msec); 0 zone resets 00:10:01.190 slat (nsec): min=9289, max=73434, avg=30177.94, stdev=9866.50 00:10:01.190 clat (usec): min=179, max=867, avg=508.01, stdev=144.82 00:10:01.190 lat (usec): min=189, max=919, avg=538.18, stdev=147.05 00:10:01.190 clat percentiles (usec): 00:10:01.190 | 1.00th=[ 212], 5.00th=[ 289], 10.00th=[ 314], 20.00th=[ 347], 00:10:01.190 | 30.00th=[ 429], 40.00th=[ 469], 50.00th=[ 506], 60.00th=[ 562], 00:10:01.190 | 70.00th=[ 603], 80.00th=[ 644], 90.00th=[ 701], 95.00th=[ 742], 00:10:01.190 | 99.00th=[ 791], 99.50th=[ 816], 99.90th=[ 865], 99.95th=[ 865], 00:10:01.190 | 99.99th=[ 865] 00:10:01.190 bw ( KiB/s): min= 4096, max= 4096, per=33.52%, avg=4096.00, stdev= 0.00, samples=1 00:10:01.190 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:01.190 lat (usec) : 250=1.39%, 500=29.27%, 750=35.73%, 1000=11.89% 00:10:01.190 lat (msec) : 2=21.72% 00:10:01.190 cpu : usr=2.30%, sys=4.60%, ctx=1365, majf=0, minf=1 00:10:01.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.190 issued rwts: total=512,851,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.190 job1: (groupid=0, jobs=1): err= 0: pid=2853855: Tue Nov 5 04:21:14 2024 00:10:01.190 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:01.190 slat (nsec): min=7127, max=46925, avg=26339.87, stdev=3274.15 00:10:01.190 clat (usec): min=536, max=1245, avg=973.64, stdev=108.17 00:10:01.190 lat (usec): min=562, max=1271, avg=999.98, stdev=108.89 00:10:01.190 clat percentiles (usec): 00:10:01.190 | 1.00th=[ 594], 5.00th=[ 775], 10.00th=[ 840], 20.00th=[ 906], 00:10:01.190 | 30.00th=[ 938], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 1012], 00:10:01.190 | 70.00th=[ 1037], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1106], 00:10:01.190 | 99.00th=[ 1172], 99.50th=[ 1221], 99.90th=[ 1254], 99.95th=[ 1254], 00:10:01.190 | 99.99th=[ 1254] 00:10:01.190 write: IOPS=833, BW=3333KiB/s (3413kB/s)(3336KiB/1001msec); 0 zone resets 00:10:01.190 slat (nsec): min=8976, max=70886, avg=23543.27, stdev=11383.96 00:10:01.190 clat (usec): min=141, max=1984, avg=550.66, stdev=172.64 00:10:01.190 lat (usec): min=151, max=1993, avg=574.20, stdev=178.50 00:10:01.190 clat percentiles (usec): 00:10:01.190 | 1.00th=[ 235], 5.00th=[ 285], 10.00th=[ 310], 20.00th=[ 400], 00:10:01.190 | 30.00th=[ 453], 40.00th=[ 515], 50.00th=[ 562], 60.00th=[ 603], 00:10:01.190 | 70.00th=[ 644], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 799], 00:10:01.190 | 99.00th=[ 938], 99.50th=[ 947], 99.90th=[ 
1991], 99.95th=[ 1991], 00:10:01.190 | 99.99th=[ 1991] 00:10:01.190 bw ( KiB/s): min= 4096, max= 4096, per=33.52%, avg=4096.00, stdev= 0.00, samples=1 00:10:01.190 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:01.190 lat (usec) : 250=0.97%, 500=22.66%, 750=33.88%, 1000=25.04% 00:10:01.190 lat (msec) : 2=17.46% 00:10:01.190 cpu : usr=2.50%, sys=4.10%, ctx=1346, majf=0, minf=1 00:10:01.190 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.190 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.190 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.190 issued rwts: total=512,834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.190 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.190 job2: (groupid=0, jobs=1): err= 0: pid=2853856: Tue Nov 5 04:21:14 2024 00:10:01.190 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:01.190 slat (nsec): min=27370, max=46999, avg=28237.20, stdev=2127.04 00:10:01.190 clat (usec): min=611, max=1248, avg=993.95, stdev=91.55 00:10:01.190 lat (usec): min=639, max=1276, avg=1022.19, stdev=91.41 00:10:01.190 clat percentiles (usec): 00:10:01.190 | 1.00th=[ 742], 5.00th=[ 816], 10.00th=[ 881], 20.00th=[ 922], 00:10:01.190 | 30.00th=[ 963], 40.00th=[ 988], 50.00th=[ 1004], 60.00th=[ 1029], 00:10:01.190 | 70.00th=[ 1045], 80.00th=[ 1057], 90.00th=[ 1090], 95.00th=[ 1123], 00:10:01.190 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1254], 99.95th=[ 1254], 00:10:01.190 | 99.99th=[ 1254] 00:10:01.191 write: IOPS=727, BW=2909KiB/s (2979kB/s)(2912KiB/1001msec); 0 zone resets 00:10:01.191 slat (nsec): min=9344, max=78342, avg=32212.04, stdev=9115.36 00:10:01.191 clat (usec): min=221, max=1895, avg=608.70, stdev=130.13 00:10:01.191 lat (usec): min=256, max=1935, avg=640.91, stdev=133.11 00:10:01.191 clat percentiles (usec): 00:10:01.191 | 1.00th=[ 326], 5.00th=[ 383], 10.00th=[ 453], 20.00th=[ 498], 00:10:01.191 | 30.00th=[ 545], 40.00th=[ 586], 50.00th=[ 619], 60.00th=[ 652], 00:10:01.191 | 70.00th=[ 676], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 791], 00:10:01.191 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 1893], 99.95th=[ 1893], 00:10:01.191 | 99.99th=[ 1893] 00:10:01.191 bw ( KiB/s): min= 4096, max= 4096, per=33.52%, avg=4096.00, stdev= 0.00, samples=1 00:10:01.191 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:01.191 lat (usec) : 250=0.16%, 500=11.77%, 750=39.60%, 1000=26.45% 00:10:01.191 lat (msec) : 2=22.02% 00:10:01.191 cpu : usr=2.70%, sys=5.00%, ctx=1242, majf=0, minf=1 00:10:01.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.191 issued rwts: total=512,728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.191 job3: (groupid=0, jobs=1): err= 0: pid=2853857: Tue Nov 5 04:21:14 2024 00:10:01.191 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:10:01.191 slat (nsec): min=7267, max=60778, avg=27262.35, stdev=3580.22 00:10:01.191 clat (usec): min=551, max=1211, avg=938.19, stdev=147.08 00:10:01.191 lat (usec): min=578, max=1252, avg=965.45, stdev=147.09 00:10:01.191 clat percentiles (usec): 00:10:01.191 | 1.00th=[ 627], 5.00th=[ 717], 10.00th=[ 750], 20.00th=[ 783], 00:10:01.191 | 30.00th=[ 824], 40.00th=[ 873], 50.00th=[ 
971], 60.00th=[ 1029], 00:10:01.191 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1106], 95.00th=[ 1139], 00:10:01.191 | 99.00th=[ 1188], 99.50th=[ 1188], 99.90th=[ 1205], 99.95th=[ 1205], 00:10:01.191 | 99.99th=[ 1205] 00:10:01.191 write: IOPS=644, BW=2577KiB/s (2639kB/s)(2580KiB/1001msec); 0 zone resets 00:10:01.191 slat (nsec): min=9108, max=70061, avg=31828.97, stdev=7966.36 00:10:01.191 clat (usec): min=231, max=1841, avg=737.43, stdev=178.04 00:10:01.191 lat (usec): min=264, max=1877, avg=769.26, stdev=179.61 00:10:01.191 clat percentiles (usec): 00:10:01.191 | 1.00th=[ 247], 5.00th=[ 408], 10.00th=[ 506], 20.00th=[ 603], 00:10:01.191 | 30.00th=[ 685], 40.00th=[ 725], 50.00th=[ 758], 60.00th=[ 799], 00:10:01.191 | 70.00th=[ 840], 80.00th=[ 873], 90.00th=[ 914], 95.00th=[ 938], 00:10:01.191 | 99.00th=[ 1012], 99.50th=[ 1434], 99.90th=[ 1844], 99.95th=[ 1844], 00:10:01.191 | 99.99th=[ 1844] 00:10:01.191 bw ( KiB/s): min= 4096, max= 4096, per=33.52%, avg=4096.00, stdev= 0.00, samples=1 00:10:01.191 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:01.191 lat (usec) : 250=0.78%, 500=4.41%, 750=25.58%, 1000=48.40% 00:10:01.191 lat (msec) : 2=20.83% 00:10:01.191 cpu : usr=2.40%, sys=4.70%, ctx=1158, majf=0, minf=2 00:10:01.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:01.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:01.191 issued rwts: total=512,645,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:01.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:01.191 00:10:01.191 Run status group 0 (all jobs): 00:10:01.191 READ: bw=8184KiB/s (8380kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:10:01.191 WRITE: bw=11.9MiB/s (12.5MB/s), 2577KiB/s-3401KiB/s (2639kB/s-3482kB/s), io=11.9MiB (12.5MB), run=1001-1001msec 00:10:01.191 00:10:01.191 Disk stats (read/write): 00:10:01.191 nvme0n1: ios=564/601, merge=0/0, ticks=1213/227, in_queue=1440, util=96.89% 00:10:01.191 nvme0n2: ios=545/593, merge=0/0, ticks=507/274, in_queue=781, util=87.45% 00:10:01.191 nvme0n3: ios=518/512, merge=0/0, ticks=691/258, in_queue=949, util=99.89% 00:10:01.191 nvme0n4: ios=441/512, merge=0/0, ticks=374/308, in_queue=682, util=89.53% 00:10:01.191 04:21:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:01.191 [global] 00:10:01.191 thread=1 00:10:01.191 invalidate=1 00:10:01.191 rw=write 00:10:01.191 time_based=1 00:10:01.191 runtime=1 00:10:01.191 ioengine=libaio 00:10:01.191 direct=1 00:10:01.191 bs=4096 00:10:01.191 iodepth=128 00:10:01.191 norandommap=0 00:10:01.191 numjobs=1 00:10:01.191 00:10:01.191 verify_dump=1 00:10:01.191 verify_backlog=512 00:10:01.191 verify_state_save=0 00:10:01.191 do_verify=1 00:10:01.191 verify=crc32c-intel 00:10:01.191 [job0] 00:10:01.191 filename=/dev/nvme0n1 00:10:01.191 [job1] 00:10:01.191 filename=/dev/nvme0n2 00:10:01.191 [job2] 00:10:01.191 filename=/dev/nvme0n3 00:10:01.191 [job3] 00:10:01.191 filename=/dev/nvme0n4 00:10:01.191 Could not set queue depth (nvme0n1) 00:10:01.191 Could not set queue depth (nvme0n2) 00:10:01.191 Could not set queue depth (nvme0n3) 00:10:01.191 Could not set queue depth (nvme0n4) 00:10:01.452 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
00:10:01.452 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.452 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.452 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:01.452 fio-3.35 00:10:01.452 Starting 4 threads 00:10:02.837 00:10:02.837 job0: (groupid=0, jobs=1): err= 0: pid=2854382: Tue Nov 5 04:21:16 2024 00:10:02.837 read: IOPS=6809, BW=26.6MiB/s (27.9MB/s)(26.7MiB/1004msec) 00:10:02.837 slat (nsec): min=879, max=13451k, avg=77089.25, stdev=554160.95 00:10:02.837 clat (usec): min=2928, max=45429, avg=9772.24, stdev=6845.53 00:10:02.837 lat (usec): min=3421, max=45457, avg=9849.33, stdev=6907.60 00:10:02.837 clat percentiles (usec): 00:10:02.837 | 1.00th=[ 5014], 5.00th=[ 5800], 10.00th=[ 6521], 20.00th=[ 7111], 00:10:02.837 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7832], 00:10:02.837 | 70.00th=[ 8094], 80.00th=[ 9110], 90.00th=[14091], 95.00th=[31065], 00:10:02.837 | 99.00th=[36439], 99.50th=[38011], 99.90th=[44303], 99.95th=[44303], 00:10:02.837 | 99.99th=[45351] 00:10:02.837 write: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec); 0 zone resets 00:10:02.837 slat (nsec): min=1535, max=10382k, avg=61419.91, stdev=383665.24 00:10:02.837 clat (usec): min=1139, max=30900, avg=8439.92, stdev=3123.44 00:10:02.837 lat (usec): min=1149, max=30902, avg=8501.34, stdev=3153.96 00:10:02.837 clat percentiles (usec): 00:10:02.837 | 1.00th=[ 4490], 5.00th=[ 5997], 10.00th=[ 6587], 20.00th=[ 6915], 00:10:02.837 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7177], 60.00th=[ 7373], 00:10:02.837 | 70.00th=[ 7767], 80.00th=[ 9634], 90.00th=[13566], 95.00th=[14222], 00:10:02.837 | 99.00th=[22152], 99.50th=[22152], 99.90th=[25035], 99.95th=[30802], 00:10:02.837 | 99.99th=[30802] 00:10:02.837 bw ( KiB/s): min=21880, max=35464, per=30.42%, avg=28672.00, stdev=9605.34, samples=2 00:10:02.838 iops : min= 5470, max= 8866, avg=7168.00, stdev=2401.33, samples=2 00:10:02.838 lat (msec) : 2=0.04%, 4=0.49%, 10=82.44%, 20=12.82%, 50=4.21% 00:10:02.838 cpu : usr=3.59%, sys=5.68%, ctx=867, majf=0, minf=1 00:10:02.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:02.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.838 issued rwts: total=6837,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.838 job1: (groupid=0, jobs=1): err= 0: pid=2854383: Tue Nov 5 04:21:16 2024 00:10:02.838 read: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec) 00:10:02.838 slat (nsec): min=902, max=12736k, avg=68867.76, stdev=489286.95 00:10:02.838 clat (usec): min=3221, max=27757, avg=8480.56, stdev=3075.77 00:10:02.838 lat (usec): min=3228, max=27760, avg=8549.43, stdev=3115.81 00:10:02.838 clat percentiles (usec): 00:10:02.838 | 1.00th=[ 4752], 5.00th=[ 5932], 10.00th=[ 6783], 20.00th=[ 7177], 00:10:02.838 | 30.00th=[ 7308], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:10:02.838 | 70.00th=[ 7963], 80.00th=[ 8586], 90.00th=[11076], 95.00th=[15139], 00:10:02.838 | 99.00th=[21627], 99.50th=[23462], 99.90th=[27132], 99.95th=[27657], 00:10:02.838 | 99.99th=[27657] 00:10:02.838 write: IOPS=7608, BW=29.7MiB/s (31.2MB/s)(29.8MiB/1004msec); 0 zone resets 00:10:02.838 slat (nsec): min=1531, 
max=12075k, avg=60520.10, stdev=360439.59 00:10:02.838 clat (usec): min=591, max=34618, avg=8711.93, stdev=4399.16 00:10:02.838 lat (usec): min=599, max=34623, avg=8772.45, stdev=4431.02 00:10:02.838 clat percentiles (usec): 00:10:02.838 | 1.00th=[ 1729], 5.00th=[ 4621], 10.00th=[ 6390], 20.00th=[ 6915], 00:10:02.838 | 30.00th=[ 6980], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7439], 00:10:02.838 | 70.00th=[ 7832], 80.00th=[10945], 90.00th=[13566], 95.00th=[15270], 00:10:02.838 | 99.00th=[30016], 99.50th=[30802], 99.90th=[31851], 99.95th=[32375], 00:10:02.838 | 99.99th=[34866] 00:10:02.838 bw ( KiB/s): min=24576, max=35520, per=31.88%, avg=30048.00, stdev=7738.58, samples=2 00:10:02.838 iops : min= 6144, max= 8880, avg=7512.00, stdev=1934.64, samples=2 00:10:02.838 lat (usec) : 750=0.02%, 1000=0.06% 00:10:02.838 lat (msec) : 2=0.49%, 4=1.63%, 10=81.01%, 20=14.60%, 50=2.19% 00:10:02.838 cpu : usr=4.49%, sys=6.98%, ctx=825, majf=0, minf=1 00:10:02.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:02.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.838 issued rwts: total=7168,7639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.838 job2: (groupid=0, jobs=1): err= 0: pid=2854384: Tue Nov 5 04:21:16 2024 00:10:02.838 read: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec) 00:10:02.838 slat (nsec): min=932, max=7812.8k, avg=88699.36, stdev=571220.60 00:10:02.838 clat (usec): min=3297, max=41980, avg=11758.83, stdev=4147.85 00:10:02.838 lat (usec): min=3304, max=41986, avg=11847.53, stdev=4189.21 00:10:02.838 clat percentiles (usec): 00:10:02.838 | 1.00th=[ 5800], 5.00th=[ 6587], 10.00th=[ 7373], 20.00th=[ 8160], 00:10:02.838 | 30.00th=[ 9241], 40.00th=[10552], 50.00th=[11076], 60.00th=[11731], 00:10:02.838 | 70.00th=[12911], 80.00th=[14615], 90.00th=[16909], 95.00th=[20055], 00:10:02.838 | 99.00th=[25035], 99.50th=[25560], 99.90th=[29230], 99.95th=[29230], 00:10:02.838 | 99.99th=[42206] 00:10:02.838 write: IOPS=5161, BW=20.2MiB/s (21.1MB/s)(20.2MiB/1004msec); 0 zone resets 00:10:02.838 slat (nsec): min=1588, max=12711k, avg=96407.41, stdev=522646.63 00:10:02.838 clat (usec): min=1227, max=48800, avg=12969.89, stdev=7933.87 00:10:02.838 lat (usec): min=1238, max=48810, avg=13066.30, stdev=7984.33 00:10:02.838 clat percentiles (usec): 00:10:02.838 | 1.00th=[ 3916], 5.00th=[ 4621], 10.00th=[ 4948], 20.00th=[ 7111], 00:10:02.838 | 30.00th=[ 7701], 40.00th=[ 8586], 50.00th=[10814], 60.00th=[13698], 00:10:02.838 | 70.00th=[15926], 80.00th=[16319], 90.00th=[23725], 95.00th=[28443], 00:10:02.838 | 99.00th=[42730], 99.50th=[44303], 99.90th=[49021], 99.95th=[49021], 00:10:02.838 | 99.99th=[49021] 00:10:02.838 bw ( KiB/s): min=16136, max=24824, per=21.73%, avg=20480.00, stdev=6143.34, samples=2 00:10:02.838 iops : min= 4034, max= 6206, avg=5120.00, stdev=1535.84, samples=2 00:10:02.838 lat (msec) : 2=0.02%, 4=0.71%, 10=39.45%, 20=50.45%, 50=9.38% 00:10:02.838 cpu : usr=4.59%, sys=4.69%, ctx=453, majf=0, minf=1 00:10:02.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:02.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.838 issued rwts: total=5120,5182,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.838 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:10:02.838 job3: (groupid=0, jobs=1): err= 0: pid=2854385: Tue Nov 5 04:21:16 2024 00:10:02.838 read: IOPS=4164, BW=16.3MiB/s (17.1MB/s)(17.0MiB/1044msec) 00:10:02.838 slat (nsec): min=924, max=10789k, avg=110986.45, stdev=669458.15 00:10:02.838 clat (usec): min=5038, max=56473, avg=14796.46, stdev=7672.72 00:10:02.838 lat (usec): min=5043, max=59245, avg=14907.45, stdev=7703.33 00:10:02.838 clat percentiles (usec): 00:10:02.838 | 1.00th=[ 8029], 5.00th=[ 9634], 10.00th=[ 9896], 20.00th=[10421], 00:10:02.838 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11863], 60.00th=[13698], 00:10:02.838 | 70.00th=[15401], 80.00th=[17957], 90.00th=[21103], 95.00th=[24249], 00:10:02.838 | 99.00th=[55837], 99.50th=[56361], 99.90th=[56361], 99.95th=[56361], 00:10:02.838 | 99.99th=[56361] 00:10:02.838 write: IOPS=4413, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:10:02.838 slat (nsec): min=1550, max=8595.6k, avg=107653.37, stdev=531352.23 00:10:02.838 clat (usec): min=1244, max=31747, avg=14764.45, stdev=5412.95 00:10:02.838 lat (usec): min=1254, max=31749, avg=14872.11, stdev=5455.75 00:10:02.838 clat percentiles (usec): 00:10:02.838 | 1.00th=[ 6980], 5.00th=[ 7832], 10.00th=[ 8455], 20.00th=[10028], 00:10:02.838 | 30.00th=[10814], 40.00th=[12518], 50.00th=[13829], 60.00th=[15664], 00:10:02.838 | 70.00th=[16057], 80.00th=[18744], 90.00th=[23987], 95.00th=[25822], 00:10:02.838 | 99.00th=[27395], 99.50th=[28967], 99.90th=[31851], 99.95th=[31851], 00:10:02.838 | 99.99th=[31851] 00:10:02.838 bw ( KiB/s): min=16384, max=20480, per=19.56%, avg=18432.00, stdev=2896.31, samples=2 00:10:02.838 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:02.838 lat (msec) : 2=0.02%, 4=0.07%, 10=15.68%, 20=68.32%, 50=15.21% 00:10:02.838 lat (msec) : 100=0.70% 00:10:02.838 cpu : usr=2.40%, sys=4.70%, ctx=510, majf=0, minf=1 00:10:02.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:02.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:02.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:02.838 issued rwts: total=4348,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:02.838 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:02.838 00:10:02.838 Run status group 0 (all jobs): 00:10:02.838 READ: bw=87.8MiB/s (92.1MB/s), 16.3MiB/s-27.9MiB/s (17.1MB/s-29.2MB/s), io=91.7MiB (96.1MB), run=1004-1044msec 00:10:02.838 WRITE: bw=92.0MiB/s (96.5MB/s), 17.2MiB/s-29.7MiB/s (18.1MB/s-31.2MB/s), io=96.1MiB (101MB), run=1004-1044msec 00:10:02.838 00:10:02.838 Disk stats (read/write): 00:10:02.838 nvme0n1: ios=5516/5632, merge=0/0, ticks=31451/29079, in_queue=60530, util=87.88% 00:10:02.838 nvme0n2: ios=5795/6144, merge=0/0, ticks=33342/38874, in_queue=72216, util=91.95% 00:10:02.838 nvme0n3: ios=4398/4608, merge=0/0, ticks=34276/35945, in_queue=70221, util=87.45% 00:10:02.838 nvme0n4: ios=3584/3941, merge=0/0, ticks=27746/32063, in_queue=59809, util=89.53% 00:10:02.838 04:21:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:02.838 [global] 00:10:02.838 thread=1 00:10:02.838 invalidate=1 00:10:02.838 rw=randwrite 00:10:02.838 time_based=1 00:10:02.838 runtime=1 00:10:02.838 ioengine=libaio 00:10:02.838 direct=1 00:10:02.838 bs=4096 00:10:02.838 iodepth=128 00:10:02.838 norandommap=0 00:10:02.838 numjobs=1 00:10:02.838 00:10:02.838 
verify_dump=1 00:10:02.838 verify_backlog=512 00:10:02.838 verify_state_save=0 00:10:02.838 do_verify=1 00:10:02.838 verify=crc32c-intel 00:10:02.838 [job0] 00:10:02.838 filename=/dev/nvme0n1 00:10:02.838 [job1] 00:10:02.838 filename=/dev/nvme0n2 00:10:02.838 [job2] 00:10:02.838 filename=/dev/nvme0n3 00:10:02.838 [job3] 00:10:02.838 filename=/dev/nvme0n4 00:10:02.838 Could not set queue depth (nvme0n1) 00:10:02.838 Could not set queue depth (nvme0n2) 00:10:02.838 Could not set queue depth (nvme0n3) 00:10:02.838 Could not set queue depth (nvme0n4) 00:10:03.099 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.099 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.099 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.099 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:03.099 fio-3.35 00:10:03.099 Starting 4 threads 00:10:04.484 00:10:04.484 job0: (groupid=0, jobs=1): err= 0: pid=2854903: Tue Nov 5 04:21:17 2024 00:10:04.484 read: IOPS=7118, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1007msec) 00:10:04.484 slat (nsec): min=925, max=15307k, avg=73699.58, stdev=570077.92 00:10:04.484 clat (usec): min=1353, max=44779, avg=9348.67, stdev=4672.53 00:10:04.484 lat (usec): min=1390, max=44785, avg=9422.37, stdev=4714.76 00:10:04.484 clat percentiles (usec): 00:10:04.484 | 1.00th=[ 4686], 5.00th=[ 5932], 10.00th=[ 6390], 20.00th=[ 6980], 00:10:04.484 | 30.00th=[ 7308], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 8848], 00:10:04.484 | 70.00th=[ 9110], 80.00th=[11076], 90.00th=[12518], 95.00th=[14222], 00:10:04.484 | 99.00th=[33817], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:10:04.484 | 99.99th=[44827] 00:10:04.484 write: IOPS=7449, BW=29.1MiB/s (30.5MB/s)(29.3MiB/1007msec); 0 zone resets 00:10:04.484 slat (nsec): min=1559, max=15881k, avg=56672.71, stdev=420043.05 00:10:04.484 clat (usec): min=1263, max=51885, avg=8103.48, stdev=4425.13 00:10:04.484 lat (usec): min=1274, max=51907, avg=8160.16, stdev=4466.92 00:10:04.484 clat percentiles (usec): 00:10:04.484 | 1.00th=[ 2933], 5.00th=[ 4047], 10.00th=[ 4555], 20.00th=[ 6128], 00:10:04.484 | 30.00th=[ 6849], 40.00th=[ 7177], 50.00th=[ 7570], 60.00th=[ 8029], 00:10:04.484 | 70.00th=[ 8455], 80.00th=[ 8848], 90.00th=[10159], 95.00th=[11863], 00:10:04.484 | 99.00th=[28705], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:04.484 | 99.99th=[51643] 00:10:04.484 bw ( KiB/s): min=26416, max=32584, per=29.22%, avg=29500.00, stdev=4361.43, samples=2 00:10:04.484 iops : min= 6604, max= 8146, avg=7375.00, stdev=1090.36, samples=2 00:10:04.484 lat (msec) : 2=0.07%, 4=2.69%, 10=79.95%, 20=14.65%, 50=2.63% 00:10:04.484 lat (msec) : 100=0.01% 00:10:04.484 cpu : usr=3.78%, sys=8.25%, ctx=658, majf=0, minf=1 00:10:04.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:04.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.484 issued rwts: total=7168,7502,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.484 job1: (groupid=0, jobs=1): err= 0: pid=2854904: Tue Nov 5 04:21:17 2024 00:10:04.484 read: IOPS=6609, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1007msec) 00:10:04.484 slat (nsec): 
min=932, max=8924.1k, avg=68540.47, stdev=494451.69 00:10:04.484 clat (usec): min=2616, max=21391, avg=8896.75, stdev=2388.40 00:10:04.484 lat (usec): min=2622, max=21405, avg=8965.29, stdev=2425.59 00:10:04.484 clat percentiles (usec): 00:10:04.484 | 1.00th=[ 4047], 5.00th=[ 5997], 10.00th=[ 6652], 20.00th=[ 7242], 00:10:04.484 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8291], 60.00th=[ 8717], 00:10:04.484 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[11994], 95.00th=[14091], 00:10:04.484 | 99.00th=[16188], 99.50th=[16712], 99.90th=[19530], 99.95th=[20317], 00:10:04.484 | 99.99th=[21365] 00:10:04.484 write: IOPS=6814, BW=26.6MiB/s (27.9MB/s)(26.8MiB/1007msec); 0 zone resets 00:10:04.484 slat (nsec): min=1571, max=16850k, avg=73450.61, stdev=611970.13 00:10:04.484 clat (usec): min=1276, max=47356, avg=9990.12, stdev=6699.40 00:10:04.484 lat (usec): min=1285, max=47388, avg=10063.57, stdev=6754.82 00:10:04.484 clat percentiles (usec): 00:10:04.484 | 1.00th=[ 3458], 5.00th=[ 4555], 10.00th=[ 5080], 20.00th=[ 6980], 00:10:04.484 | 30.00th=[ 7373], 40.00th=[ 7767], 50.00th=[ 8029], 60.00th=[ 8455], 00:10:04.484 | 70.00th=[ 9503], 80.00th=[10028], 90.00th=[16188], 95.00th=[28181], 00:10:04.484 | 99.00th=[37487], 99.50th=[41157], 99.90th=[41157], 99.95th=[44303], 00:10:04.484 | 99.99th=[47449] 00:10:04.484 bw ( KiB/s): min=26752, max=27128, per=26.69%, avg=26940.00, stdev=265.87, samples=2 00:10:04.484 iops : min= 6688, max= 6782, avg=6735.00, stdev=66.47, samples=2 00:10:04.484 lat (msec) : 2=0.16%, 4=1.68%, 10=76.42%, 20=17.37%, 50=4.37% 00:10:04.484 cpu : usr=5.07%, sys=6.26%, ctx=422, majf=0, minf=2 00:10:04.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:04.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.484 issued rwts: total=6656,6862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.484 job2: (groupid=0, jobs=1): err= 0: pid=2854905: Tue Nov 5 04:21:17 2024 00:10:04.484 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:04.484 slat (nsec): min=934, max=17123k, avg=98859.89, stdev=651021.71 00:10:04.484 clat (usec): min=4720, max=33924, avg=12264.77, stdev=4011.99 00:10:04.484 lat (usec): min=4730, max=34188, avg=12363.63, stdev=4070.15 00:10:04.484 clat percentiles (usec): 00:10:04.484 | 1.00th=[ 7570], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10028], 00:10:04.484 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[11600], 00:10:04.484 | 70.00th=[12125], 80.00th=[14353], 90.00th=[15401], 95.00th=[19006], 00:10:04.484 | 99.00th=[31327], 99.50th=[31851], 99.90th=[33424], 99.95th=[33817], 00:10:04.484 | 99.99th=[33817] 00:10:04.484 write: IOPS=4891, BW=19.1MiB/s (20.0MB/s)(19.2MiB/1003msec); 0 zone resets 00:10:04.484 slat (nsec): min=1541, max=9673.1k, avg=97960.98, stdev=541292.45 00:10:04.484 clat (usec): min=1565, max=69596, avg=14430.53, stdev=9742.24 00:10:04.484 lat (usec): min=1569, max=69598, avg=14528.49, stdev=9784.26 00:10:04.484 clat percentiles (usec): 00:10:04.484 | 1.00th=[ 2868], 5.00th=[ 5342], 10.00th=[ 6718], 20.00th=[ 8356], 00:10:04.484 | 30.00th=[ 9110], 40.00th=[ 9765], 50.00th=[11338], 60.00th=[12387], 00:10:04.484 | 70.00th=[15270], 80.00th=[17957], 90.00th=[28705], 95.00th=[35390], 00:10:04.484 | 99.00th=[55837], 99.50th=[64750], 99.90th=[69731], 99.95th=[69731], 00:10:04.484 | 99.99th=[69731] 00:10:04.484 bw ( 
KiB/s): min=18728, max=19504, per=18.94%, avg=19116.00, stdev=548.71, samples=2 00:10:04.484 iops : min= 4682, max= 4876, avg=4779.00, stdev=137.18, samples=2 00:10:04.484 lat (msec) : 2=0.17%, 4=0.86%, 10=30.31%, 20=58.10%, 50=9.99% 00:10:04.484 lat (msec) : 100=0.57% 00:10:04.484 cpu : usr=2.89%, sys=4.69%, ctx=491, majf=0, minf=1 00:10:04.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:04.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.484 issued rwts: total=4608,4906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.484 job3: (groupid=0, jobs=1): err= 0: pid=2854906: Tue Nov 5 04:21:17 2024 00:10:04.484 read: IOPS=5903, BW=23.1MiB/s (24.2MB/s)(23.2MiB/1005msec) 00:10:04.484 slat (nsec): min=971, max=15363k, avg=78656.43, stdev=553960.46 00:10:04.484 clat (usec): min=3328, max=25832, avg=10598.50, stdev=3292.62 00:10:04.484 lat (usec): min=3335, max=28172, avg=10677.16, stdev=3323.61 00:10:04.484 clat percentiles (usec): 00:10:04.484 | 1.00th=[ 4146], 5.00th=[ 6587], 10.00th=[ 7635], 20.00th=[ 8029], 00:10:04.484 | 30.00th=[ 8291], 40.00th=[ 8848], 50.00th=[10159], 60.00th=[11338], 00:10:04.484 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13960], 95.00th=[16188], 00:10:04.484 | 99.00th=[22152], 99.50th=[23200], 99.90th=[25560], 99.95th=[25560], 00:10:04.484 | 99.99th=[25822] 00:10:04.484 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:10:04.484 slat (nsec): min=1659, max=11823k, avg=74367.02, stdev=460510.11 00:10:04.484 clat (usec): min=445, max=24828, avg=10512.08, stdev=3186.84 00:10:04.484 lat (usec): min=479, max=24831, avg=10586.44, stdev=3215.21 00:10:04.484 clat percentiles (usec): 00:10:04.484 | 1.00th=[ 3490], 5.00th=[ 6194], 10.00th=[ 7570], 20.00th=[ 8094], 00:10:04.484 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[10552], 60.00th=[11207], 00:10:04.484 | 70.00th=[11994], 80.00th=[12387], 90.00th=[14091], 95.00th=[16909], 00:10:04.484 | 99.00th=[19006], 99.50th=[21103], 99.90th=[23725], 99.95th=[24511], 00:10:04.484 | 99.99th=[24773] 00:10:04.484 bw ( KiB/s): min=20488, max=28664, per=24.34%, avg=24576.00, stdev=5781.31, samples=2 00:10:04.484 iops : min= 5122, max= 7166, avg=6144.00, stdev=1445.33, samples=2 00:10:04.484 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:04.484 lat (msec) : 2=0.15%, 4=0.78%, 10=46.76%, 20=50.63%, 50=1.66% 00:10:04.484 cpu : usr=4.08%, sys=6.47%, ctx=537, majf=0, minf=1 00:10:04.484 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:04.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:04.484 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:04.484 issued rwts: total=5933,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:04.484 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:04.484 00:10:04.484 Run status group 0 (all jobs): 00:10:04.484 READ: bw=94.5MiB/s (99.1MB/s), 17.9MiB/s-27.8MiB/s (18.8MB/s-29.2MB/s), io=95.2MiB (99.8MB), run=1003-1007msec 00:10:04.484 WRITE: bw=98.6MiB/s (103MB/s), 19.1MiB/s-29.1MiB/s (20.0MB/s-30.5MB/s), io=99.3MiB (104MB), run=1003-1007msec 00:10:04.484 00:10:04.484 Disk stats (read/write): 00:10:04.484 nvme0n1: ios=6048/6144, merge=0/0, ticks=47415/41212, in_queue=88627, util=95.39% 00:10:04.484 nvme0n2: ios=5157/5591, merge=0/0, ticks=26578/30999, in_queue=57577, 
util=87.05% 00:10:04.484 nvme0n3: ios=3634/3895, merge=0/0, ticks=33068/47518, in_queue=80586, util=96.52% 00:10:04.484 nvme0n4: ios=5160/5159, merge=0/0, ticks=33285/32505, in_queue=65790, util=91.56% 00:10:04.484 04:21:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:04.485 04:21:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2855244 00:10:04.485 04:21:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:04.485 04:21:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:04.485 [global] 00:10:04.485 thread=1 00:10:04.485 invalidate=1 00:10:04.485 rw=read 00:10:04.485 time_based=1 00:10:04.485 runtime=10 00:10:04.485 ioengine=libaio 00:10:04.485 direct=1 00:10:04.485 bs=4096 00:10:04.485 iodepth=1 00:10:04.485 norandommap=1 00:10:04.485 numjobs=1 00:10:04.485 00:10:04.485 [job0] 00:10:04.485 filename=/dev/nvme0n1 00:10:04.485 [job1] 00:10:04.485 filename=/dev/nvme0n2 00:10:04.485 [job2] 00:10:04.485 filename=/dev/nvme0n3 00:10:04.485 [job3] 00:10:04.485 filename=/dev/nvme0n4 00:10:04.485 Could not set queue depth (nvme0n1) 00:10:04.485 Could not set queue depth (nvme0n2) 00:10:04.485 Could not set queue depth (nvme0n3) 00:10:04.485 Could not set queue depth (nvme0n4) 00:10:04.745 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.745 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.745 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.745 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:04.745 fio-3.35 00:10:04.745 Starting 4 threads 00:10:07.287 04:21:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:07.547 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=692224, buflen=4096 00:10:07.547 fio: pid=2855436, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:07.547 04:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:07.807 04:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:07.807 04:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:07.807 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=290816, buflen=4096 00:10:07.807 fio: pid=2855435, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:08.079 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=15904768, buflen=4096 00:10:08.079 fio: pid=2855433, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:08.079 04:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.079 04:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:08.079 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=15712256, buflen=4096 00:10:08.079 fio: pid=2855434, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:08.079 04:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.079 04:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:08.079 00:10:08.079 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2855433: Tue Nov 5 04:21:21 2024 00:10:08.079 read: IOPS=1308, BW=5231KiB/s (5357kB/s)(15.2MiB/2969msec) 00:10:08.079 slat (usec): min=3, max=36053, avg=39.92, stdev=677.00 00:10:08.079 clat (usec): min=194, max=1182, avg=713.78, stdev=92.68 00:10:08.079 lat (usec): min=213, max=36779, avg=749.65, stdev=634.73 00:10:08.079 clat percentiles (usec): 00:10:08.079 | 1.00th=[ 412], 5.00th=[ 553], 10.00th=[ 603], 20.00th=[ 660], 00:10:08.079 | 30.00th=[ 685], 40.00th=[ 701], 50.00th=[ 717], 60.00th=[ 742], 00:10:08.079 | 70.00th=[ 766], 80.00th=[ 791], 90.00th=[ 824], 95.00th=[ 848], 00:10:08.079 | 99.00th=[ 889], 99.50th=[ 930], 99.90th=[ 1012], 99.95th=[ 1057], 00:10:08.079 | 99.99th=[ 1188] 00:10:08.079 bw ( KiB/s): min= 5056, max= 5608, per=53.20%, avg=5387.20, stdev=227.73, samples=5 00:10:08.079 iops : min= 1264, max= 1402, avg=1346.80, stdev=56.93, samples=5 00:10:08.079 lat (usec) : 250=0.05%, 500=2.70%, 750=61.84%, 1000=35.25% 00:10:08.079 lat (msec) : 2=0.13% 00:10:08.079 cpu : usr=1.52%, sys=3.20%, ctx=3887, majf=0, minf=1 00:10:08.079 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.079 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.079 issued rwts: total=3884,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.079 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.079 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2855434: Tue Nov 5 04:21:21 2024 00:10:08.079 read: IOPS=1220, BW=4880KiB/s (4998kB/s)(15.0MiB/3144msec) 00:10:08.079 slat (usec): min=6, max=17926, avg=41.49, stdev=471.39 00:10:08.079 clat (usec): min=184, max=41282, avg=769.85, stdev=670.22 00:10:08.079 lat (usec): min=191, max=41306, avg=811.34, stdev=818.91 00:10:08.079 clat percentiles (usec): 00:10:08.079 | 1.00th=[ 424], 5.00th=[ 545], 10.00th=[ 594], 20.00th=[ 652], 00:10:08.079 | 30.00th=[ 685], 40.00th=[ 709], 50.00th=[ 734], 60.00th=[ 766], 00:10:08.079 | 70.00th=[ 807], 80.00th=[ 898], 90.00th=[ 988], 95.00th=[ 1020], 00:10:08.079 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1172], 99.95th=[ 1221], 00:10:08.079 | 99.99th=[41157] 00:10:08.079 bw ( KiB/s): min= 3864, max= 5600, per=48.40%, avg=4901.67, stdev=767.99, samples=6 00:10:08.080 iops : min= 966, max= 1400, avg=1225.33, stdev=192.01, samples=6 00:10:08.080 lat (usec) : 250=0.10%, 500=2.63%, 750=52.05%, 1000=37.22% 00:10:08.080 lat (msec) : 2=7.95%, 50=0.03% 00:10:08.080 cpu : usr=1.40%, sys=3.69%, ctx=3845, majf=0, minf=2 00:10:08.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.080 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.080 issued rwts: total=3837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.080 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2855435: Tue Nov 5 04:21:21 2024 00:10:08.080 read: IOPS=25, BW=101KiB/s (104kB/s)(284KiB/2801msec) 00:10:08.080 slat (usec): min=9, max=19806, avg=300.83, stdev=2331.14 00:10:08.080 clat (usec): min=640, max=42050, avg=38836.08, stdev=9468.07 00:10:08.080 lat (usec): min=670, max=61003, avg=39140.77, stdev=9822.87 00:10:08.080 clat percentiles (usec): 00:10:08.080 | 1.00th=[ 644], 5.00th=[ 947], 10.00th=[40633], 20.00th=[41157], 00:10:08.080 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:08.080 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:08.080 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:08.080 | 99.99th=[42206] 00:10:08.080 bw ( KiB/s): min= 96, max= 112, per=1.01%, avg=102.40, stdev= 6.69, samples=5 00:10:08.080 iops : min= 24, max= 28, avg=25.60, stdev= 1.67, samples=5 00:10:08.080 lat (usec) : 750=2.78%, 1000=2.78% 00:10:08.080 lat (msec) : 50=93.06% 00:10:08.080 cpu : usr=0.11%, sys=0.00%, ctx=73, majf=0, minf=2 00:10:08.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.080 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.080 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.080 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2855436: Tue Nov 5 04:21:21 2024 00:10:08.080 read: IOPS=65, BW=260KiB/s (266kB/s)(676KiB/2604msec) 00:10:08.080 slat (nsec): min=10932, max=45079, avg=26019.96, stdev=4754.41 00:10:08.080 clat (usec): min=650, max=41896, avg=15217.77, stdev=19197.87 00:10:08.080 lat (usec): min=677, max=41922, avg=15243.78, stdev=19195.67 00:10:08.080 clat percentiles (usec): 00:10:08.080 | 1.00th=[ 775], 5.00th=[ 898], 10.00th=[ 955], 20.00th=[ 996], 00:10:08.080 | 30.00th=[ 1020], 40.00th=[ 1037], 50.00th=[ 1074], 60.00th=[ 1123], 00:10:08.080 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:08.080 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:08.080 | 99.99th=[41681] 00:10:08.080 bw ( KiB/s): min= 96, max= 392, per=1.54%, avg=156.80, stdev=131.53, samples=5 00:10:08.080 iops : min= 24, max= 98, avg=39.20, stdev=32.88, samples=5 00:10:08.080 lat (usec) : 750=0.59%, 1000=20.00% 00:10:08.080 lat (msec) : 2=43.53%, 50=35.29% 00:10:08.080 cpu : usr=0.00%, sys=0.35%, ctx=170, majf=0, minf=2 00:10:08.080 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:08.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.080 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:08.080 issued rwts: total=170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:08.080 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:08.080 00:10:08.080 Run status group 0 (all jobs): 00:10:08.080 READ: bw=9.89MiB/s (10.4MB/s), 101KiB/s-5231KiB/s (104kB/s-5357kB/s), io=31.1MiB (32.6MB), run=2604-3144msec 
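For reference, the read phase summarized above — including the "Operation not supported" errors, which are the expected result of the backing bdevs being deleted mid-run by the bdev_malloc_delete/bdev_raid_delete loop traced in fio.sh — can be reproduced standalone. A minimal sketch, with job parameters copied from the [global]/[jobN] lines echoed earlier in the log; the fio-read.job file name is illustrative, not something the harness creates:

  # Recreate the fio-wrapper read phase (-p nvmf -i 4096 -d 1 -t read -r 10)
  cat > fio-read.job <<'EOF'
  [global]
  thread=1
  invalidate=1
  rw=read
  time_based=1
  runtime=10
  ioengine=libaio
  direct=1
  bs=4096
  iodepth=1
  norandommap=1
  numjobs=1

  [job0]
  filename=/dev/nvme0n1
  [job1]
  filename=/dev/nvme0n2
  [job2]
  filename=/dev/nvme0n3
  [job3]
  filename=/dev/nvme0n4
  EOF
  fio fio-read.job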
00:10:08.080 00:10:08.080 Disk stats (read/write): 00:10:08.080 nvme0n1: ios=3752/0, merge=0/0, ticks=2616/0, in_queue=2616, util=93.12% 00:10:08.080 nvme0n2: ios=3777/0, merge=0/0, ticks=2790/0, in_queue=2790, util=93.71% 00:10:08.080 nvme0n3: ios=66/0, merge=0/0, ticks=2555/0, in_queue=2555, util=96.03% 00:10:08.080 nvme0n4: ios=169/0, merge=0/0, ticks=2566/0, in_queue=2566, util=96.46% 00:10:08.340 04:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.340 04:21:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:08.601 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.601 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:08.601 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.601 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:08.862 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:08.862 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:09.123 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:09.123 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2855244 00:10:09.123 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:09.123 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:09.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.123 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:09.123 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:10:09.123 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:10:09.123 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.123 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:10:09.123 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.123 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:10:09.123 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:09.123 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:09.123 nvmf hotplug test: fio failed as expected 00:10:09.123 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.384 rmmod nvme_tcp 00:10:09.384 rmmod nvme_fabrics 00:10:09.384 rmmod nvme_keyring 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2851202 ']' 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2851202 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 2851202 ']' 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 2851202 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:09.384 04:21:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2851202 00:10:09.645 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2851202' 00:10:09.646 killing process with pid 2851202 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 2851202 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 2851202 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.646 04:21:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.195 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:12.195 00:10:12.195 real 0m28.704s 00:10:12.195 user 2m41.295s 00:10:12.195 sys 0m9.283s 00:10:12.195 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:12.195 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.196 ************************************ 00:10:12.196 END TEST nvmf_fio_target 00:10:12.196 ************************************ 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:12.196 ************************************ 00:10:12.196 START TEST nvmf_bdevio 00:10:12.196 ************************************ 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:12.196 * Looking for test storage... 
00:10:12.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:12.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.196 --rc genhtml_branch_coverage=1 00:10:12.196 --rc genhtml_function_coverage=1 00:10:12.196 --rc genhtml_legend=1 00:10:12.196 --rc geninfo_all_blocks=1 00:10:12.196 --rc geninfo_unexecuted_blocks=1 00:10:12.196 00:10:12.196 ' 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:12.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.196 --rc genhtml_branch_coverage=1 00:10:12.196 --rc genhtml_function_coverage=1 00:10:12.196 --rc genhtml_legend=1 00:10:12.196 --rc geninfo_all_blocks=1 00:10:12.196 --rc geninfo_unexecuted_blocks=1 00:10:12.196 00:10:12.196 ' 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:12.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.196 --rc genhtml_branch_coverage=1 00:10:12.196 --rc genhtml_function_coverage=1 00:10:12.196 --rc genhtml_legend=1 00:10:12.196 --rc geninfo_all_blocks=1 00:10:12.196 --rc geninfo_unexecuted_blocks=1 00:10:12.196 00:10:12.196 ' 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:12.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.196 --rc genhtml_branch_coverage=1 00:10:12.196 --rc genhtml_function_coverage=1 00:10:12.196 --rc genhtml_legend=1 00:10:12.196 --rc geninfo_all_blocks=1 00:10:12.196 --rc geninfo_unexecuted_blocks=1 00:10:12.196 00:10:12.196 ' 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.196 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:12.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:12.197 04:21:25 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:20.350 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:20.350 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:20.350 04:21:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:20.350 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:20.350 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:20.350 
04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:20.350 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:20.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:10:20.351 00:10:20.351 --- 10.0.0.2 ping statistics --- 00:10:20.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.351 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:20.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:10:20.351 00:10:20.351 --- 10.0.0.1 ping statistics --- 00:10:20.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.351 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2860548 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2860548 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2860548 ']' 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:20.351 04:21:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.351 [2024-11-05 04:21:32.952491] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
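The ping exchanges above close out nvmf_tcp_init: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, while the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the commands traced in the log (address flushes omitted, and the iptables rule shown without the SPDK_NVMF bookkeeping comment the harness appends):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator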
00:10:20.351 [2024-11-05 04:21:32.952555] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.351 [2024-11-05 04:21:33.054259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:20.351 [2024-11-05 04:21:33.107325] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.351 [2024-11-05 04:21:33.107385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.351 [2024-11-05 04:21:33.107394] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.351 [2024-11-05 04:21:33.107401] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.351 [2024-11-05 04:21:33.107413] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.351 [2024-11-05 04:21:33.109825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:20.351 [2024-11-05 04:21:33.109991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:20.351 [2024-11-05 04:21:33.110152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:20.351 [2024-11-05 04:21:33.110154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.351 [2024-11-05 04:21:33.837814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.351 Malloc0 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.351 04:21:33 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:20.351 [2024-11-05 04:21:33.911614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:20.351 { 00:10:20.351 "params": { 00:10:20.351 "name": "Nvme$subsystem", 00:10:20.351 "trtype": "$TEST_TRANSPORT", 00:10:20.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:20.351 "adrfam": "ipv4", 00:10:20.351 "trsvcid": "$NVMF_PORT", 00:10:20.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:20.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:20.351 "hdgst": ${hdgst:-false}, 00:10:20.351 "ddgst": ${ddgst:-false} 00:10:20.351 }, 00:10:20.351 "method": "bdev_nvme_attach_controller" 00:10:20.351 } 00:10:20.351 EOF 00:10:20.351 )") 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:20.351 04:21:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:20.351 "params": { 00:10:20.351 "name": "Nvme1", 00:10:20.351 "trtype": "tcp", 00:10:20.351 "traddr": "10.0.0.2", 00:10:20.351 "adrfam": "ipv4", 00:10:20.351 "trsvcid": "4420", 00:10:20.351 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:20.351 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:20.351 "hdgst": false, 00:10:20.351 "ddgst": false 00:10:20.351 }, 00:10:20.351 "method": "bdev_nvme_attach_controller" 00:10:20.351 }' 00:10:20.351 [2024-11-05 04:21:33.970612] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
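The inline JSON printed above is gen_nvmf_target_json output with its $TEST_TRANSPORT/$NVMF_FIRST_TARGET_IP placeholders resolved; bdevio consumes it on fd 62 to create the Nvme1 bdev before running its test suite. Against a running SPDK app the same attach could instead be issued over RPC — a sketch only, assuming the usual scripts/rpc.py option names rather than anything exercised in this run:

  # Hypothetical manual equivalent of the bdev_nvme_attach_controller
  # params block fed to bdevio above (hdgst/ddgst default to false).
  scripts/rpc.py bdev_nvme_attach_controller \
      -b Nvme1 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1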
00:10:20.351 [2024-11-05 04:21:33.970683] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2860841 ] 00:10:20.612 [2024-11-05 04:21:34.048784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:20.612 [2024-11-05 04:21:34.093574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.612 [2024-11-05 04:21:34.093691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:20.612 [2024-11-05 04:21:34.093695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.873 I/O targets: 00:10:20.874 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:20.874 00:10:20.874 00:10:20.874 CUnit - A unit testing framework for C - Version 2.1-3 00:10:20.874 http://cunit.sourceforge.net/ 00:10:20.874 00:10:20.874 00:10:20.874 Suite: bdevio tests on: Nvme1n1 00:10:20.874 Test: blockdev write read block ...passed 00:10:20.874 Test: blockdev write zeroes read block ...passed 00:10:20.874 Test: blockdev write zeroes read no split ...passed 00:10:21.135 Test: blockdev write zeroes read split ...passed 00:10:21.135 Test: blockdev write zeroes read split partial ...passed 00:10:21.135 Test: blockdev reset ...[2024-11-05 04:21:34.571604] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:21.135 [2024-11-05 04:21:34.571676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf0970 (9): Bad file descriptor 00:10:21.135 [2024-11-05 04:21:34.583739] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:10:21.135 passed 00:10:21.135 Test: blockdev write read 8 blocks ...passed 00:10:21.135 Test: blockdev write read size > 128k ...passed 00:10:21.135 Test: blockdev write read invalid size ...passed 00:10:21.135 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:21.135 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:21.135 Test: blockdev write read max offset ...passed 00:10:21.135 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:21.135 Test: blockdev writev readv 8 blocks ...passed 00:10:21.135 Test: blockdev writev readv 30 x 1block ...passed 00:10:21.396 Test: blockdev writev readv block ...passed 00:10:21.396 Test: blockdev writev readv size > 128k ...passed 00:10:21.396 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:21.396 Test: blockdev comparev and writev ...[2024-11-05 04:21:34.804171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.396 [2024-11-05 04:21:34.804197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:21.396 [2024-11-05 04:21:34.804208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.396 [2024-11-05 04:21:34.804215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:21.396 [2024-11-05 04:21:34.804581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.396 [2024-11-05 04:21:34.804590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:21.396 [2024-11-05 04:21:34.804599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.396 [2024-11-05 04:21:34.804605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:21.396 [2024-11-05 04:21:34.804997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.396 [2024-11-05 04:21:34.805006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:21.396 [2024-11-05 04:21:34.805015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.396 [2024-11-05 04:21:34.805020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:21.396 [2024-11-05 04:21:34.805382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.396 [2024-11-05 04:21:34.805389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:21.396 [2024-11-05 04:21:34.805398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:21.396 [2024-11-05 04:21:34.805404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:21.396 passed 00:10:21.396 Test: blockdev nvme passthru rw ...passed 00:10:21.396 Test: blockdev nvme passthru vendor specific ...[2024-11-05 04:21:34.890281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:21.396 [2024-11-05 04:21:34.890292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:21.396 [2024-11-05 04:21:34.890526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:21.396 [2024-11-05 04:21:34.890533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:21.396 [2024-11-05 04:21:34.890774] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:21.396 [2024-11-05 04:21:34.890782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:21.396 [2024-11-05 04:21:34.891023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:21.396 [2024-11-05 04:21:34.891030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:21.396 passed 00:10:21.396 Test: blockdev nvme admin passthru ...passed 00:10:21.396 Test: blockdev copy ...passed 00:10:21.396 00:10:21.396 Run Summary: Type Total Ran Passed Failed Inactive 00:10:21.396 suites 1 1 n/a 0 0 00:10:21.396 tests 23 23 23 0 0 00:10:21.396 asserts 152 152 152 0 n/a 00:10:21.396 00:10:21.396 Elapsed time = 1.090 seconds 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:21.658 rmmod nvme_tcp 00:10:21.658 rmmod nvme_fabrics 00:10:21.658 rmmod nvme_keyring 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
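[Annotation] The nvmfcleanup trace above unloads the initiator modules with errexit suspended and up to 20 attempts; `modprobe -r` also removes the dependents reported by the rmmod lines. A condensed sketch of that loop — the success-break is an assumption, since the xtrace only shows the first, successful pass:

  # Unload the NVMe/TCP initiator stack without tripping errexit. Removing
  # nvme-tcp also pulls out its dependents (nvme_fabrics, nvme_keyring above);
  # retrying covers the case where an earlier user still holds the module.
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break
  done
  modprobe -v -r nvme-fabrics
  set -e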
00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2860548 ']' 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2860548 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 2860548 ']' 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2860548 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2860548 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2860548' 00:10:21.658 killing process with pid 2860548 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2860548 00:10:21.658 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2860548 00:10:21.918 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:21.918 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:21.918 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:21.918 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:21.918 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:21.918 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:21.918 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:21.918 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:21.918 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:21.918 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.918 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.918 04:21:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.012 04:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:24.012 00:10:24.012 real 0m12.103s 00:10:24.012 user 0m13.387s 00:10:24.012 sys 0m6.088s 00:10:24.012 04:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:24.012 04:21:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:24.012 ************************************ 00:10:24.012 END TEST nvmf_bdevio 00:10:24.012 ************************************ 00:10:24.012 04:21:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:24.012 00:10:24.012 real 4m59.983s 00:10:24.012 user 11m56.597s 00:10:24.012 sys 1m47.266s 
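[Annotation] The iptr step inside nvmf_tcp_fini above is the suite's firewall cleanup: every rule the tests install carries an SPDK_NVMF comment (visible later in this log where the listener port is opened with `-m comment --comment 'SPDK_NVMF:...'`), so teardown strips them all in one save/filter/restore pass while leaving untagged rules intact:

  # Drop every SPDK-tagged rule in one round-trip; everything else survives.
  iptables-save | grep -v SPDK_NVMF | iptables-restore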
00:10:24.012 04:21:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:24.012 04:21:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:24.012 ************************************ 00:10:24.012 END TEST nvmf_target_core 00:10:24.012 ************************************ 00:10:24.012 04:21:37 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:24.012 04:21:37 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:24.012 04:21:37 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:24.012 04:21:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:24.013 ************************************ 00:10:24.013 START TEST nvmf_target_extra 00:10:24.013 ************************************ 00:10:24.013 04:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:24.013 * Looking for test storage... 00:10:24.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:24.013 04:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:24.013 04:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:10:24.013 04:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:24.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.274 --rc genhtml_branch_coverage=1 00:10:24.274 --rc genhtml_function_coverage=1 00:10:24.274 --rc genhtml_legend=1 00:10:24.274 --rc geninfo_all_blocks=1 00:10:24.274 --rc geninfo_unexecuted_blocks=1 00:10:24.274 00:10:24.274 ' 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:24.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.274 --rc genhtml_branch_coverage=1 00:10:24.274 --rc genhtml_function_coverage=1 00:10:24.274 --rc genhtml_legend=1 00:10:24.274 --rc geninfo_all_blocks=1 00:10:24.274 --rc geninfo_unexecuted_blocks=1 00:10:24.274 00:10:24.274 ' 00:10:24.274 04:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:24.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.274 --rc genhtml_branch_coverage=1 00:10:24.274 --rc genhtml_function_coverage=1 00:10:24.274 --rc genhtml_legend=1 00:10:24.274 --rc geninfo_all_blocks=1 00:10:24.274 --rc geninfo_unexecuted_blocks=1 00:10:24.274 00:10:24.275 ' 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:24.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.275 --rc genhtml_branch_coverage=1 00:10:24.275 --rc genhtml_function_coverage=1 00:10:24.275 --rc genhtml_legend=1 00:10:24.275 --rc geninfo_all_blocks=1 00:10:24.275 --rc geninfo_unexecuted_blocks=1 00:10:24.275 00:10:24.275 ' 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
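[Annotation] The `lt 1.15 2` trace above walks scripts/common.sh's cmp_versions: both version strings are split on `.`/`-`/`:` and compared numerically field by field, padding the shorter one with zeros, to pick the lcov option spelling. A condensed, self-contained equivalent — not the harness's exact code, which routes through cmp_versions and related comparison wrappers:

  # Succeed when $1 is strictly older than $2 (numeric, per dotted field).
  lt() {
      local -a v1 v2
      local i
      IFS=.-: read -ra v1 <<< "$1"
      IFS=.-: read -ra v2 <<< "$2"
      for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
          ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
          ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
      done
      return 1    # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov < 2: keep the legacy --rc option spelling"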
00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:24.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:24.275 ************************************ 00:10:24.275 START TEST nvmf_example 00:10:24.275 ************************************ 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:24.275 * Looking for test storage... 
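[Annotation] One genuine shell bug surfaces in the trace above and repeats each time nvmf/common.sh is sourced: line 33 expands an unset flag straight into `'[' '' -eq 1 ']'`, which is what produces the "[: : integer expression expected" message. A defensive spelling — SPDK_TEST_FLAG is a placeholder for whichever variable line 33 actually tests:

  # Default the expansion so an empty value never reaches the numeric test.
  if [ "${SPDK_TEST_FLAG:-0}" -eq 1 ]; then
      NVMF_APP+=(--extra-arg)    # illustrative branch body; not from the trace
  fi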
00:10:24.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:10:24.275 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:24.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.539 --rc genhtml_branch_coverage=1 00:10:24.539 --rc genhtml_function_coverage=1 00:10:24.539 --rc genhtml_legend=1 00:10:24.539 --rc geninfo_all_blocks=1 00:10:24.539 --rc geninfo_unexecuted_blocks=1 00:10:24.539 00:10:24.539 ' 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:24.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.539 --rc genhtml_branch_coverage=1 00:10:24.539 --rc genhtml_function_coverage=1 00:10:24.539 --rc genhtml_legend=1 00:10:24.539 --rc geninfo_all_blocks=1 00:10:24.539 --rc geninfo_unexecuted_blocks=1 00:10:24.539 00:10:24.539 ' 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:24.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.539 --rc genhtml_branch_coverage=1 00:10:24.539 --rc genhtml_function_coverage=1 00:10:24.539 --rc genhtml_legend=1 00:10:24.539 --rc geninfo_all_blocks=1 00:10:24.539 --rc geninfo_unexecuted_blocks=1 00:10:24.539 00:10:24.539 ' 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:24.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:24.539 --rc genhtml_branch_coverage=1 00:10:24.539 --rc genhtml_function_coverage=1 00:10:24.539 --rc genhtml_legend=1 00:10:24.539 --rc geninfo_all_blocks=1 00:10:24.539 --rc geninfo_unexecuted_blocks=1 00:10:24.539 00:10:24.539 ' 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:24.539 04:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.539 04:21:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:24.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:24.539 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:24.540 04:21:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:24.540 04:21:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:32.703 04:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:32.703 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:32.703 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:32.703 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.703 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:32.704 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.704 04:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:32.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:32.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms 00:10:32.704 00:10:32.704 --- 10.0.0.2 ping statistics --- 00:10:32.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.704 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:10:32.704 00:10:32.704 --- 10.0.0.1 ping statistics --- 00:10:32.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.704 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2865468 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2865468 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 2865468 ']' 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:32.704 04:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:32.704 04:21:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.704 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:32.704 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:10:32.704 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:32.704 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:32.704 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.704 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:32.704 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.704 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.704 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.704 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:32.704 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.704 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:32.966 04:21:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:45.208 Initializing NVMe Controllers 00:10:45.208 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:45.208 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:45.208 Initialization complete. Launching workers. 00:10:45.208 ======================================================== 00:10:45.208 Latency(us) 00:10:45.208 Device Information : IOPS MiB/s Average min max 00:10:45.208 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19266.00 75.26 3323.17 594.53 16351.69 00:10:45.208 ======================================================== 00:10:45.208 Total : 19266.00 75.26 3323.17 594.53 16351.69 00:10:45.208 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:45.208 rmmod nvme_tcp 00:10:45.208 rmmod nvme_fabrics 00:10:45.208 rmmod nvme_keyring 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2865468 ']' 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2865468 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 2865468 ']' 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 2865468 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2865468 00:10:45.208 04:21:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2865468' 00:10:45.208 killing process with pid 2865468 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 2865468 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 2865468 00:10:45.208 nvmf threads initialized successfully 00:10:45.208 bdev subsystem initialized successfully 00:10:45.208 created an nvmf target service 00:10:45.208 created targets' poll groups 00:10:45.208 all subsystems of target started 00:10:45.208 nvmf target is running 00:10:45.208 all subsystems of target stopped 00:10:45.208 destroyed targets' poll groups 00:10:45.208 destroyed the nvmf target service 00:10:45.208 bdev subsystem finished successfully 00:10:45.208 nvmf threads destroyed successfully 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.208 04:21:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.470 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:45.470 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:45.470 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:45.470 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.470 00:10:45.470 real 0m21.301s 00:10:45.470 user 0m46.773s 00:10:45.470 sys 0m6.847s 00:10:45.470 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:45.470 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:45.470 ************************************ 00:10:45.470 END TEST nvmf_example 00:10:45.470 ************************************
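For reference, the bring-up and measurement traced in this test can be reproduced outside the harness with SPDK's stock tooling. A minimal sketch, assuming an SPDK checkout at $SPDK with the examples built; the namespace name, addresses, and every flag are copied from the trace above, and scripts/rpc.py talks to the same /var/tmp/spdk.sock that the harness's rpc_cmd wrapper waited on:

    #!/usr/bin/env bash
    # Sketch only: the SPDK path is an assumption; all other values come from the log.
    SPDK=/path/to/spdk
    sudo ip netns exec cvl_0_0_ns_spdk "$SPDK/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
    sleep 2    # crude stand-in for the harness's waitforlisten on /var/tmp/spdk.sock

    rpc() { sudo "$SPDK/scripts/rpc.py" "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB I/O unit size
    rpc bdev_malloc_create 64 512                   # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Same workload as the run above: queue depth 64, 4 KiB I/O, random R/W with 30% reads, 10 s
    "$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

As a sanity check on the table above, the throughput column is just IOPS times I/O size: 19266.00 x 4096 B / 2^20 comes to 75.26 MiB/s, matching the reported figure.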
00:10:45.732 04:21:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:45.732 04:21:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:45.732 04:21:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:45.732 04:21:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:45.732 ************************************ 00:10:45.732 START TEST nvmf_filesystem 00:10:45.732 ************************************ 00:10:45.732 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:45.732 * Looking for test storage... 00:10:45.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.732 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:45.732 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:45.732 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:45.732 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:45.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.997 --rc genhtml_branch_coverage=1 00:10:45.997 --rc genhtml_function_coverage=1 00:10:45.997 --rc genhtml_legend=1 00:10:45.997 --rc geninfo_all_blocks=1 00:10:45.997 --rc geninfo_unexecuted_blocks=1 00:10:45.997 00:10:45.997 ' 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:45.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.997 --rc genhtml_branch_coverage=1 00:10:45.997 --rc genhtml_function_coverage=1 00:10:45.997 --rc genhtml_legend=1 00:10:45.997 --rc geninfo_all_blocks=1 00:10:45.997 --rc geninfo_unexecuted_blocks=1 00:10:45.997 00:10:45.997 ' 00:10:45.997 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:45.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.997 --rc genhtml_branch_coverage=1 00:10:45.997 --rc genhtml_function_coverage=1 00:10:45.997 --rc genhtml_legend=1 00:10:45.997 --rc geninfo_all_blocks=1 00:10:45.997 --rc geninfo_unexecuted_blocks=1 00:10:45.997 00:10:45.997 ' 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:45.998 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:45.998 --rc genhtml_branch_coverage=1 00:10:45.998 --rc genhtml_function_coverage=1 00:10:45.998 --rc genhtml_legend=1 00:10:45.998 --rc geninfo_all_blocks=1 00:10:45.998 --rc geninfo_unexecuted_blocks=1 00:10:45.998 00:10:45.998 ' 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:45.998 04:21:59 
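The lt 1.15 2 probe above is answered by cmp_versions, whose xtrace spells out the algorithm: each version string is split into fields on ., -, or : (IFS=.-:), the field counts are recorded (ver1_l=2, ver2_l=1), and fields are compared numerically left to right until one side wins. Here 1 < 2 decides it in the first field, so lcov 1.15 sorts before 2 and the older-lcov option set is exported. A compact re-implementation of the same idea, purely as a sketch (numeric fields assumed, missing fields treated as zero; the real scripts/common.sh is more defensive):

    # Returns 0 (true) when version $1 is strictly less than version $2.
    version_lt() {
        local -a v1 v2
        local i n a b
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            a=${v1[i]:-0} b=${v2[i]:-0}    # shorter version padded with zeros
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"    # matches the trace's result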
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:45.998 
04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:45.998 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:45.999 #define SPDK_CONFIG_H 00:10:45.999 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:45.999 #define SPDK_CONFIG_APPS 1 00:10:45.999 #define SPDK_CONFIG_ARCH native 00:10:45.999 #undef SPDK_CONFIG_ASAN 00:10:45.999 #undef SPDK_CONFIG_AVAHI 00:10:45.999 #undef SPDK_CONFIG_CET 00:10:45.999 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:45.999 #define SPDK_CONFIG_COVERAGE 1 00:10:45.999 #define SPDK_CONFIG_CROSS_PREFIX 00:10:45.999 #undef SPDK_CONFIG_CRYPTO 00:10:45.999 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:45.999 #undef SPDK_CONFIG_CUSTOMOCF 00:10:45.999 #undef SPDK_CONFIG_DAOS 00:10:45.999 #define SPDK_CONFIG_DAOS_DIR 00:10:45.999 #define SPDK_CONFIG_DEBUG 1 00:10:45.999 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:45.999 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:45.999 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:45.999 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:45.999 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:45.999 #undef SPDK_CONFIG_DPDK_UADK 00:10:45.999 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:45.999 #define SPDK_CONFIG_EXAMPLES 1 00:10:45.999 #undef SPDK_CONFIG_FC 00:10:45.999 #define SPDK_CONFIG_FC_PATH 00:10:45.999 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:45.999 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:45.999 #define SPDK_CONFIG_FSDEV 1 00:10:45.999 #undef SPDK_CONFIG_FUSE 00:10:45.999 #undef SPDK_CONFIG_FUZZER 00:10:45.999 #define SPDK_CONFIG_FUZZER_LIB 00:10:45.999 #undef SPDK_CONFIG_GOLANG 00:10:45.999 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:45.999 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:45.999 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:45.999 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:45.999 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:45.999 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:45.999 #undef SPDK_CONFIG_HAVE_LZ4 00:10:45.999 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:45.999 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:45.999 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:45.999 #define SPDK_CONFIG_IDXD 1 00:10:45.999 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:45.999 #undef SPDK_CONFIG_IPSEC_MB 00:10:45.999 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:45.999 #define SPDK_CONFIG_ISAL 1 00:10:45.999 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:45.999 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:45.999 #define SPDK_CONFIG_LIBDIR 00:10:45.999 #undef SPDK_CONFIG_LTO 00:10:45.999 #define SPDK_CONFIG_MAX_LCORES 128 00:10:45.999 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:45.999 #define SPDK_CONFIG_NVME_CUSE 1 00:10:45.999 #undef SPDK_CONFIG_OCF 00:10:45.999 #define SPDK_CONFIG_OCF_PATH 00:10:45.999 #define SPDK_CONFIG_OPENSSL_PATH 00:10:45.999 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:45.999 #define SPDK_CONFIG_PGO_DIR 00:10:45.999 #undef SPDK_CONFIG_PGO_USE 00:10:45.999 #define SPDK_CONFIG_PREFIX /usr/local 00:10:45.999 #undef SPDK_CONFIG_RAID5F 00:10:45.999 #undef SPDK_CONFIG_RBD 00:10:45.999 #define SPDK_CONFIG_RDMA 1 00:10:45.999 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:45.999 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:45.999 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:45.999 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:45.999 #define SPDK_CONFIG_SHARED 1 00:10:45.999 #undef SPDK_CONFIG_SMA 00:10:45.999 #define SPDK_CONFIG_TESTS 1 00:10:45.999 #undef SPDK_CONFIG_TSAN 
00:10:45.999 #define SPDK_CONFIG_UBLK 1 00:10:45.999 #define SPDK_CONFIG_UBSAN 1 00:10:45.999 #undef SPDK_CONFIG_UNIT_TESTS 00:10:45.999 #undef SPDK_CONFIG_URING 00:10:45.999 #define SPDK_CONFIG_URING_PATH 00:10:45.999 #undef SPDK_CONFIG_URING_ZNS 00:10:45.999 #undef SPDK_CONFIG_USDT 00:10:45.999 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:45.999 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:45.999 #define SPDK_CONFIG_VFIO_USER 1 00:10:45.999 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:45.999 #define SPDK_CONFIG_VHOST 1 00:10:45.999 #define SPDK_CONFIG_VIRTIO 1 00:10:45.999 #undef SPDK_CONFIG_VTUNE 00:10:45.999 #define SPDK_CONFIG_VTUNE_DIR 00:10:45.999 #define SPDK_CONFIG_WERROR 1 00:10:45.999 #define SPDK_CONFIG_WPDK_DIR 00:10:45.999 #undef SPDK_CONFIG_XNVME 00:10:45.999 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:45.999 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:46.000 04:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:46.000 04:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:46.000 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
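One side effect visible in the exports above: PATH, LD_LIBRARY_PATH, and PYTHONPATH accumulate the same segments several times over as paths/export.sh and autotest_common.sh are re-sourced down the test tree. Lookups still resolve identically (the first matching entry wins), so the duplication is only log noise, but an order-preserving dedupe is cheap if one ever wants it; a sketch, not something the harness itself does:

    # Collapse repeated entries in a colon-separated list, keeping first-seen order.
    dedupe_path() {
        printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
    }

    PATH=$(dedupe_path "$PATH")    # e.g. a:b:a:b -> a:b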
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']'
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:10:46.001 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
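The suppression-file setup traced above (rm -rf, cat, echo leak:libfuse3.so, export LSAN_OPTIONS) tells LeakSanitizer to ignore leak reports whose stacks pass through libfuse3.so. A condensed sketch of the same pattern, assuming nothing beyond a writable /var/tmp:

  supp=/var/tmp/asan_suppression_file
  rm -rf "$supp"
  echo 'leak:libfuse3.so' >> "$supp"      # one leak:<pattern> entry per line
  export LSAN_OPTIONS=suppressions=$supp  # picked up by every subsequent ASan/LSan run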
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV=
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]]
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]]
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]=
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt=
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']'
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind=
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind=
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']'
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j144
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=()
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE=
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@"
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2868313 ]]
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2868313
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648
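set_test_storage, whose trace follows, picks a directory with at least the requested free space (2 GiB here), falling back from $testdir to a fresh /tmp/spdk.XXXXXX tree. A condensed sketch of the selection idea, assuming storage_candidates is populated as in the trace; the real function instead parses one `df -T` pass into per-mount associative arrays:

  requested_size=2147483648                          # 2 GiB, as passed above
  for target_dir in "${storage_candidates[@]}"; do
      avail_kb=$(df --output=avail "$target_dir" | tail -1)
      if (( avail_kb * 1024 >= requested_size )); then
          echo "using $target_dir"                   # first candidate with enough room wins
          break
      fi
  done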
00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.jwxR2M 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.jwxR2M/tests/target /tmp/spdk.jwxR2M 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:10:46.002 04:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=122534776832 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=129356541952 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=6821765120 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64668237824 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678268928 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=10031104 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=25847947264 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=25871310848 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=23363584 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=efivarfs 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=efivarfs 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=216064 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=507904 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=287744 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:46.002 04:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=64677556224 00:10:46.002 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=64678273024 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=716800 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12935639040 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12935651328 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:10:46.003 * Looking for test storage... 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=122534776832 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=9036357632 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:46.003 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:46.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.265 --rc genhtml_branch_coverage=1 00:10:46.265 --rc genhtml_function_coverage=1 00:10:46.265 --rc genhtml_legend=1 00:10:46.265 --rc geninfo_all_blocks=1 00:10:46.265 --rc geninfo_unexecuted_blocks=1 00:10:46.265 00:10:46.265 ' 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:46.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.265 --rc genhtml_branch_coverage=1 00:10:46.265 --rc genhtml_function_coverage=1 00:10:46.265 --rc genhtml_legend=1 00:10:46.265 --rc geninfo_all_blocks=1 00:10:46.265 --rc geninfo_unexecuted_blocks=1 00:10:46.265 00:10:46.265 ' 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:46.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.265 --rc genhtml_branch_coverage=1 00:10:46.265 --rc genhtml_function_coverage=1 00:10:46.265 --rc genhtml_legend=1 00:10:46.265 --rc geninfo_all_blocks=1 00:10:46.265 --rc geninfo_unexecuted_blocks=1 00:10:46.265 00:10:46.265 ' 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:46.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:46.265 --rc genhtml_branch_coverage=1 00:10:46.265 --rc genhtml_function_coverage=1 00:10:46.265 --rc genhtml_legend=1 00:10:46.265 --rc geninfo_all_blocks=1 00:10:46.265 --rc geninfo_unexecuted_blocks=1 00:10:46.265 00:10:46.265 ' 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@7 -- # uname -s 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:46.265 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:46.265 04:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:46.265 04:21:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:54.415 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:54.416 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:54.416 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:54.416 04:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:54.416 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:54.416 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:54.416 04:22:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:54.416 04:22:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:54.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:54.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:10:54.416 00:10:54.416 --- 10.0.0.2 ping statistics --- 00:10:54.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.416 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:54.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:54.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:10:54.416 00:10:54.416 --- 10.0.0.1 ping statistics --- 00:10:54.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:54.416 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.416 ************************************ 00:10:54.416 START TEST nvmf_filesystem_no_in_capsule 00:10:54.416 ************************************ 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:54.416 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:54.417 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:54.417 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.417 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2871998 00:10:54.417 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2871998 00:10:54.417 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:54.417 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2871998 ']' 00:10:54.417 
04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.417 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:54.417 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.417 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:54.417 04:22:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.417 [2024-11-05 04:22:07.227633] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:10:54.417 [2024-11-05 04:22:07.227694] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.417 [2024-11-05 04:22:07.309683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.417 [2024-11-05 04:22:07.351940] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.417 [2024-11-05 04:22:07.351974] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.417 [2024-11-05 04:22:07.351983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.417 [2024-11-05 04:22:07.351990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.417 [2024-11-05 04:22:07.351995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
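nvmfappstart has just forked the target inside the test namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF, PID 2871998), and waitforlisten blocks until that process answers on its UNIX-domain RPC socket. A minimal sketch of the polling pattern, using the PID, rpc_addr and max_retries values from the trace (the harness's loop additionally probes the socket with an RPC before declaring success):

  spdk_pid=2871998
  rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do                 # max_retries=100 as in the trace
      kill -0 "$spdk_pid" 2>/dev/null || exit 1   # target died before it could listen
      [[ -S $rpc_addr ]] && break                 # socket present: RPC server is up
      sleep 0.5
  done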
00:10:54.417 [2024-11-05 04:22:07.353502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.417 [2024-11-05 04:22:07.353621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.417 [2024-11-05 04:22:07.353812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.417 [2024-11-05 04:22:07.353813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.417 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:54.417 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:10:54.417 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:54.417 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.417 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.678 [2024-11-05 04:22:08.076676] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.678 Malloc1 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.678 04:22:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.678 [2024-11-05 04:22:08.209302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.678 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:10:54.678 { 00:10:54.678 "name": "Malloc1", 00:10:54.679 "aliases": [ 00:10:54.679 "06a6125a-844a-4792-aeea-8bec359b9137" 00:10:54.679 ], 00:10:54.679 "product_name": "Malloc disk", 00:10:54.679 "block_size": 512, 00:10:54.679 "num_blocks": 1048576, 00:10:54.679 "uuid": "06a6125a-844a-4792-aeea-8bec359b9137", 00:10:54.679 "assigned_rate_limits": { 00:10:54.679 "rw_ios_per_sec": 0, 00:10:54.679 "rw_mbytes_per_sec": 0, 00:10:54.679 "r_mbytes_per_sec": 0, 00:10:54.679 "w_mbytes_per_sec": 0 00:10:54.679 }, 00:10:54.679 "claimed": true, 00:10:54.679 "claim_type": "exclusive_write", 00:10:54.679 "zoned": false, 00:10:54.679 "supported_io_types": { 00:10:54.679 "read": 
true, 00:10:54.679 "write": true, 00:10:54.679 "unmap": true, 00:10:54.679 "flush": true, 00:10:54.679 "reset": true, 00:10:54.679 "nvme_admin": false, 00:10:54.679 "nvme_io": false, 00:10:54.679 "nvme_io_md": false, 00:10:54.679 "write_zeroes": true, 00:10:54.679 "zcopy": true, 00:10:54.679 "get_zone_info": false, 00:10:54.679 "zone_management": false, 00:10:54.679 "zone_append": false, 00:10:54.679 "compare": false, 00:10:54.679 "compare_and_write": false, 00:10:54.679 "abort": true, 00:10:54.679 "seek_hole": false, 00:10:54.679 "seek_data": false, 00:10:54.679 "copy": true, 00:10:54.679 "nvme_iov_md": false 00:10:54.679 }, 00:10:54.679 "memory_domains": [ 00:10:54.679 { 00:10:54.679 "dma_device_id": "system", 00:10:54.679 "dma_device_type": 1 00:10:54.679 }, 00:10:54.679 { 00:10:54.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.679 "dma_device_type": 2 00:10:54.679 } 00:10:54.679 ], 00:10:54.679 "driver_specific": {} 00:10:54.679 } 00:10:54.679 ]' 00:10:54.679 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:10:54.679 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:10:54.679 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:10:54.939 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:10:54.939 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:10:54.939 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:10:54.939 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:54.939 04:22:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:56.325 04:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:56.325 04:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:10:56.325 04:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:10:56.325 04:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:10:56.325 04:22:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:58.890 04:22:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:58.890 04:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:59.462 04:22:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:00.404 ************************************ 00:11:00.404 START TEST filesystem_ext4 00:11:00.404 ************************************ 00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
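Up to this point the trace has built the data path end to end: the 512 MiB Malloc1 bdev is sized via bdev_get_bdevs plus jq, the host connects over NVMe/TCP, waitforserial polls until the namespace surfaces, and the kernel-side device size is cross-checked before a single GPT partition is laid down. A condensed sketch of those steps, reconstructed from the xtrace above (commands and values are as traced; rpc_cmd and sec_size_to_bytes are harness helpers visible in the trace, and the block_size * num_blocks arithmetic is an assumption inferred from the traced values 512, 1048576, and 536870912):

    # Size of the backing malloc bdev, in bytes (get_bdev_size as traced)
    bs=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')    # 512
    nb=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')    # 1048576
    malloc_size=$(( bs * nb ))                                        # 536870912

    # Connect the initiator to the subsystem exported on 10.0.0.2:4420
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

    # waitforserial: poll lsblk until the namespace shows the expected serial
    i=0
    while (( i++ <= 15 )); do
        sleep 2
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
    done

    # Resolve the device name, compare sizes, then partition it
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
    nvme_size=$(sec_size_to_bytes "$nvme_name")    # 536870912, read under /sys/block
    mkdir -p /mnt/device
    (( nvme_size == malloc_size ))
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe

With the sizes matching, the per-filesystem cases that follow all operate on the same /dev/nvme0n1p1 partition.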
00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:00.404 04:22:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:00.404 mke2fs 1.47.0 (5-Feb-2023) 00:11:00.404 Discarding device blocks: 0/522240 done 00:11:00.404 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:00.404 Filesystem UUID: 944b21e9-36d7-46f3-b960-4fbfb82b782b 00:11:00.404 Superblock backups stored on blocks: 00:11:00.404 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:00.404 00:11:00.404 Allocating group tables: 0/64 done 00:11:00.404 Writing inode tables: 0/64 done 00:11:00.667 Creating journal (8192 blocks): done 00:11:00.667 Writing superblocks and filesystem accounting information: 0/64 done 00:11:00.667 00:11:00.667 04:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:00.667 04:22:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:05.958 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:05.958 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:05.958 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:05.958 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:05.958 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:05.958 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:06.220 
04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2871998 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:06.220 00:11:06.220 real 0m5.683s 00:11:06.220 user 0m0.029s 00:11:06.220 sys 0m0.073s 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:06.220 ************************************ 00:11:06.220 END TEST filesystem_ext4 00:11:06.220 ************************************ 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.220 ************************************ 00:11:06.220 START TEST filesystem_btrfs 00:11:06.220 ************************************ 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:06.220 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:06.221 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:06.221 04:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:06.221 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:06.221 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:06.486 btrfs-progs v6.8.1 00:11:06.486 See https://btrfs.readthedocs.io for more information. 00:11:06.486 00:11:06.486 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:06.486 NOTE: several default settings have changed in version 5.15, please make sure 00:11:06.486 this does not affect your deployments: 00:11:06.486 - DUP for metadata (-m dup) 00:11:06.486 - enabled no-holes (-O no-holes) 00:11:06.486 - enabled free-space-tree (-R free-space-tree) 00:11:06.486 00:11:06.486 Label: (null) 00:11:06.486 UUID: 8ca8b5f6-01ad-4502-b13a-70af4c17accb 00:11:06.486 Node size: 16384 00:11:06.486 Sector size: 4096 (CPU page size: 4096) 00:11:06.486 Filesystem size: 510.00MiB 00:11:06.486 Block group profiles: 00:11:06.486 Data: single 8.00MiB 00:11:06.486 Metadata: DUP 32.00MiB 00:11:06.486 System: DUP 8.00MiB 00:11:06.486 SSD detected: yes 00:11:06.486 Zoned device: no 00:11:06.486 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:06.486 Checksum: crc32c 00:11:06.486 Number of devices: 1 00:11:06.486 Devices: 00:11:06.486 ID SIZE PATH 00:11:06.486 1 510.00MiB /dev/nvme0n1p1 00:11:06.486 00:11:06.486 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:06.486 04:22:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:07.436 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:07.436 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:07.436 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:07.436 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2871998 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:07.697 
04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:07.697 00:11:07.697 real 0m1.426s 00:11:07.697 user 0m0.030s 00:11:07.697 sys 0m0.121s 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:07.697 ************************************ 00:11:07.697 END TEST filesystem_btrfs 00:11:07.697 ************************************ 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.697 ************************************ 00:11:07.697 START TEST filesystem_xfs 00:11:07.697 ************************************ 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:07.697 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:07.698 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:11:07.698 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:07.698 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:07.698 04:22:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:07.698 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:07.698 = sectsz=512 attr=2, projid32bit=1 00:11:07.698 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:07.698 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:07.698 data 
= bsize=4096 blocks=130560, imaxpct=25 00:11:07.698 = sunit=0 swidth=0 blks 00:11:07.698 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:07.698 log =internal log bsize=4096 blocks=16384, version=2 00:11:07.698 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:07.698 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:08.641 Discarding blocks...Done. 00:11:08.641 04:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:08.641 04:22:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.555 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.816 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:10.816 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.816 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:10.816 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:10.816 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.816 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2871998 00:11:10.816 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.816 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.816 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.816 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.816 00:11:10.816 real 0m3.056s 00:11:10.816 user 0m0.028s 00:11:10.816 sys 0m0.075s 00:11:10.816 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.816 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:10.816 ************************************ 00:11:10.816 END TEST filesystem_xfs 00:11:10.816 ************************************ 00:11:10.816 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:10.816 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:10.816 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.077 04:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2871998 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2871998 ']' 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2871998 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2871998 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2871998' 00:11:11.077 killing process with pid 2871998 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 2871998 00:11:11.077 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 2871998 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:11.338 00:11:11.338 real 0m17.676s 00:11:11.338 user 1m9.872s 00:11:11.338 sys 0m1.385s 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.338 ************************************ 00:11:11.338 END TEST nvmf_filesystem_no_in_capsule 00:11:11.338 ************************************ 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:11.338 ************************************ 00:11:11.338 START TEST nvmf_filesystem_in_capsule 00:11:11.338 ************************************ 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2875628 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2875628 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2875628 ']' 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.338 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:11.339 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:11.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
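The no-in-capsule pass finishes by disconnecting the host, deleting the subsystem, and killing the target via killprocess, after which a fresh nvmf_tgt is launched for the in-capsule variant (its startup is traced next). A condensed sketch of that teardown/restart seam, using only commands visible in the trace (backgrounding with & and capturing $! are assumptions; the trace only records the resulting pid):

    # Teardown of the first target (killprocess 2871998, as traced)
    pid=2871998
    kill -0 "$pid"                                    # assert it is still running
    ps --no-headers -o comm= "$pid"                   # reactor_0 (not sudo)
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"

    # Fresh target for the in-capsule run (nvmfappstart -m 0xF, as traced)
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!                                        # 2875628 in this run
    waitforlisten "$nvmfpid"                          # wait for /var/tmp/spdk.sock

The only functional difference downstream is the transport setup, rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096, which enables 4096-byte in-capsule data for the second pass (traced below).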
00:11:11.339 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:11.339 04:22:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.599 [2024-11-05 04:22:24.977811] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:11:11.599 [2024-11-05 04:22:24.977864] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:11.599 [2024-11-05 04:22:25.057164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:11.599 [2024-11-05 04:22:25.097095] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:11.599 [2024-11-05 04:22:25.097133] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:11.599 [2024-11-05 04:22:25.097142] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:11.600 [2024-11-05 04:22:25.097149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:11.600 [2024-11-05 04:22:25.097155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:11.600 [2024-11-05 04:22:25.098715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:11.600 [2024-11-05 04:22:25.098862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:11.600 [2024-11-05 04:22:25.099134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:11.600 [2024-11-05 04:22:25.099135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.171 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:12.171 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:11:12.171 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:12.171 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:12.171 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.431 [2024-11-05 04:22:25.833720] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:12.431 04:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.431 Malloc1 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.431 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.432 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.432 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:12.432 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.432 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.432 [2024-11-05 04:22:25.957393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:12.432 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.432 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:12.432 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:11:12.432 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:11:12.432 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:11:12.432 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:11:12.432 04:22:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:12.432 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.432 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:12.432 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.432 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:11:12.432 { 00:11:12.432 "name": "Malloc1", 00:11:12.432 "aliases": [ 00:11:12.432 "74f12a46-cd37-44a3-9b4e-1cd1e3fd1e09" 00:11:12.432 ], 00:11:12.432 "product_name": "Malloc disk", 00:11:12.432 "block_size": 512, 00:11:12.432 "num_blocks": 1048576, 00:11:12.432 "uuid": "74f12a46-cd37-44a3-9b4e-1cd1e3fd1e09", 00:11:12.432 "assigned_rate_limits": { 00:11:12.432 "rw_ios_per_sec": 0, 00:11:12.432 "rw_mbytes_per_sec": 0, 00:11:12.432 "r_mbytes_per_sec": 0, 00:11:12.432 "w_mbytes_per_sec": 0 00:11:12.432 }, 00:11:12.432 "claimed": true, 00:11:12.432 "claim_type": "exclusive_write", 00:11:12.432 "zoned": false, 00:11:12.432 "supported_io_types": { 00:11:12.432 "read": true, 00:11:12.432 "write": true, 00:11:12.432 "unmap": true, 00:11:12.432 "flush": true, 00:11:12.432 "reset": true, 00:11:12.432 "nvme_admin": false, 00:11:12.432 "nvme_io": false, 00:11:12.432 "nvme_io_md": false, 00:11:12.432 "write_zeroes": true, 00:11:12.432 "zcopy": true, 00:11:12.432 "get_zone_info": false, 00:11:12.432 "zone_management": false, 00:11:12.432 "zone_append": false, 00:11:12.432 "compare": false, 00:11:12.432 "compare_and_write": false, 00:11:12.432 "abort": true, 00:11:12.432 "seek_hole": false, 00:11:12.432 "seek_data": false, 00:11:12.432 "copy": true, 00:11:12.432 "nvme_iov_md": false 00:11:12.432 }, 00:11:12.432 "memory_domains": [ 00:11:12.432 { 00:11:12.432 "dma_device_id": "system", 00:11:12.432 "dma_device_type": 1 00:11:12.432 }, 00:11:12.432 { 00:11:12.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.432 "dma_device_type": 2 00:11:12.432 } 00:11:12.432 ], 00:11:12.432 "driver_specific": {} 00:11:12.432 } 00:11:12.432 ]' 00:11:12.432 04:22:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:11:12.432 04:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:11:12.432 04:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:11:12.692 04:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:11:12.692 04:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:11:12.692 04:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:11:12.692 04:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:12.692 04:22:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:14.076 04:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:14.076 04:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:11:14.076 04:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:11:14.077 04:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:11:14.077 04:22:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:11:15.990 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:11:15.990 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:11:15.990 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:11:15.990 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:11:15.990 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:11:15.990 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:11:15.990 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:15.991 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:16.252 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:16.252 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:16.252 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:16.252 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:16.252 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:16.252 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:16.252 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:16.252 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:16.252 04:22:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:16.252 04:22:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:16.825 04:22:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:17.768 ************************************ 00:11:17.768 START TEST filesystem_in_capsule_ext4 00:11:17.768 ************************************ 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:11:17.768 04:22:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:17.768 mke2fs 1.47.0 (5-Feb-2023) 00:11:17.768 Discarding device blocks: 0/522240 done 00:11:18.028 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:18.028 Filesystem UUID: 58940433-8312-4d1a-bc1f-59cb3726022b 00:11:18.028 Superblock backups stored on blocks: 00:11:18.028 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:18.028 00:11:18.028 Allocating group tables: 0/64 done 00:11:18.028 Writing inode tables: 
0/64 done 00:11:21.330 Creating journal (8192 blocks): done 00:11:21.330 Writing superblocks and filesystem accounting information: 0/64 done 00:11:21.330 00:11:21.330 04:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:11:21.330 04:22:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:26.620 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2875628 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:26.880 00:11:26.880 real 0m9.009s 00:11:26.880 user 0m0.035s 00:11:26.880 sys 0m0.074s 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:26.880 ************************************ 00:11:26.880 END TEST filesystem_in_capsule_ext4 00:11:26.880 ************************************ 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.880 
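Each filesystem case ends with the same smoke test just traced for ext4: mount the new partition, create and remove a file with syncs around it, unmount, and then confirm that the target process and both block devices survived. A condensed sketch of that check (paths and pid as traced; error handling omitted):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 2875628                           # nvmf_tgt must still be alive
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible

The btrfs and xfs cases below repeat exactly this sequence against the same partition.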
************************************ 00:11:26.880 START TEST filesystem_in_capsule_btrfs 00:11:26.880 ************************************ 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:11:26.880 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:27.142 btrfs-progs v6.8.1 00:11:27.142 See https://btrfs.readthedocs.io for more information. 00:11:27.142 00:11:27.142 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:27.142 NOTE: several default settings have changed in version 5.15, please make sure 00:11:27.142 this does not affect your deployments: 00:11:27.142 - DUP for metadata (-m dup) 00:11:27.142 - enabled no-holes (-O no-holes) 00:11:27.142 - enabled free-space-tree (-R free-space-tree) 00:11:27.142 00:11:27.142 Label: (null) 00:11:27.142 UUID: a0a75bf0-dfa7-4037-a736-19609906c487 00:11:27.142 Node size: 16384 00:11:27.142 Sector size: 4096 (CPU page size: 4096) 00:11:27.142 Filesystem size: 510.00MiB 00:11:27.142 Block group profiles: 00:11:27.142 Data: single 8.00MiB 00:11:27.142 Metadata: DUP 32.00MiB 00:11:27.142 System: DUP 8.00MiB 00:11:27.142 SSD detected: yes 00:11:27.142 Zoned device: no 00:11:27.142 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:27.142 Checksum: crc32c 00:11:27.142 Number of devices: 1 00:11:27.142 Devices: 00:11:27.142 ID SIZE PATH 00:11:27.142 1 510.00MiB /dev/nvme0n1p1 00:11:27.142 00:11:27.142 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:11:27.142 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2875628 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.403 00:11:27.403 real 0m0.518s 00:11:27.403 user 0m0.029s 00:11:27.403 sys 0m0.115s 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:11:27.403 ************************************ 00:11:27.403 END TEST filesystem_in_capsule_btrfs 00:11:27.403 ************************************ 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:27.403 04:22:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.403 ************************************ 00:11:27.403 START TEST filesystem_in_capsule_xfs 00:11:27.403 ************************************ 00:11:27.403 04:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:11:27.403 04:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:27.403 04:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.403 04:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:27.403 04:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:11:27.403 04:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:11:27.403 04:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:11:27.403 04:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:11:27.403 04:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:11:27.403 04:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:11:27.403 04:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:27.664 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:27.664 = sectsz=512 attr=2, projid32bit=1 00:11:27.664 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:27.664 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:27.664 data = bsize=4096 blocks=130560, imaxpct=25 00:11:27.664 = sunit=0 swidth=0 blks 00:11:27.664 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:27.664 log =internal log bsize=4096 blocks=16384, version=2 00:11:27.664 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:27.664 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:28.235 Discarding blocks...Done. 
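The mkfs.xfs report above is the final step of make_filesystem in target/filesystem.sh; the harness then mounts the freshly formatted NVMe-oF partition, performs a minimal write/sync/remove cycle, unmounts, and verifies that the nvmf_tgt process and block devices survived. A condensed sketch of that per-filesystem check, with the xtrace noise stripped — the /mnt/device path, $nvmfpid variable, and device names are taken from the trace, and ext4 would use -F where xfs/btrfs use -f:

    fstype=xfs
    dev=/dev/nvme0n1p1            # partition created on the NVMe-oF attached namespace
    mkfs.$fstype -f "$dev"        # force flag chosen per fstype by make_filesystem
    mount "$dev" /mnt/device
    touch /mnt/device/aaa         # minimal I/O across the TCP fabric
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"            # target process must have survived the I/O
    lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still attached
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition still visible

The same sequence runs once per filesystem type, which is why the btrfs variant above walks through identical steps.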
00:11:28.235 04:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:11:28.235 04:22:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:30.852 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:30.852 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:30.852 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:30.852 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:30.852 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:30.852 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:30.852 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2875628 00:11:30.852 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:30.852 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:30.852 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:30.852 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:30.852 00:11:30.852 real 0m3.355s 00:11:30.852 user 0m0.023s 00:11:30.852 sys 0m0.083s 00:11:30.852 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:30.852 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:30.852 ************************************ 00:11:30.852 END TEST filesystem_in_capsule_xfs 00:11:30.852 ************************************ 00:11:30.852 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.132 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2875628 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2875628 ']' 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2875628 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2875628 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2875628' 00:11:31.132 killing process with pid 2875628 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 2875628 00:11:31.132 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 2875628 00:11:31.416 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:31.416 00:11:31.416 real 0m20.040s 00:11:31.416 user 1m19.271s 00:11:31.416 sys 0m1.456s 00:11:31.416 04:22:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:31.416 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.416 ************************************ 00:11:31.416 END TEST nvmf_filesystem_in_capsule 00:11:31.416 ************************************ 00:11:31.416 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:31.416 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:31.416 04:22:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:31.416 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:31.416 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:31.416 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.416 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.416 rmmod nvme_tcp 00:11:31.416 rmmod nvme_fabrics 00:11:31.416 rmmod nvme_keyring 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.679 04:22:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.596 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:33.596 00:11:33.596 real 0m47.966s 00:11:33.596 user 2m31.529s 00:11:33.596 sys 0m8.670s 00:11:33.596 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:33.596 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.596 
************************************ 00:11:33.596 END TEST nvmf_filesystem 00:11:33.596 ************************************ 00:11:33.596 04:22:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:33.596 04:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:33.596 04:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:33.596 04:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:33.596 ************************************ 00:11:33.596 START TEST nvmf_target_discovery 00:11:33.596 ************************************ 00:11:33.596 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:33.859 * Looking for test storage... 00:11:33.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:33.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.859 --rc genhtml_branch_coverage=1 00:11:33.859 --rc genhtml_function_coverage=1 00:11:33.859 --rc genhtml_legend=1 00:11:33.859 --rc geninfo_all_blocks=1 00:11:33.859 --rc geninfo_unexecuted_blocks=1 00:11:33.859 00:11:33.859 ' 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:33.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.859 --rc genhtml_branch_coverage=1 00:11:33.859 --rc genhtml_function_coverage=1 00:11:33.859 --rc genhtml_legend=1 00:11:33.859 --rc geninfo_all_blocks=1 00:11:33.859 --rc geninfo_unexecuted_blocks=1 00:11:33.859 00:11:33.859 ' 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:33.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.859 --rc genhtml_branch_coverage=1 00:11:33.859 --rc genhtml_function_coverage=1 00:11:33.859 --rc genhtml_legend=1 00:11:33.859 --rc geninfo_all_blocks=1 00:11:33.859 --rc geninfo_unexecuted_blocks=1 00:11:33.859 00:11:33.859 ' 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:33.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.859 --rc genhtml_branch_coverage=1 00:11:33.859 --rc genhtml_function_coverage=1 00:11:33.859 --rc genhtml_legend=1 00:11:33.859 --rc geninfo_all_blocks=1 00:11:33.859 --rc geninfo_unexecuted_blocks=1 00:11:33.859 00:11:33.859 ' 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.859 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.860 04:22:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.007 04:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.007 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:42.008 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:42.008 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:42.008 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:42.008 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.008 04:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.008 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.008 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.605 ms 00:11:42.008 00:11:42.008 --- 10.0.0.2 ping statistics --- 00:11:42.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.008 rtt min/avg/max/mdev = 0.605/0.605/0.605/0.000 ms 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:42.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.264 ms 00:11:42.008 00:11:42.008 --- 10.0.0.1 ping statistics --- 00:11:42.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.008 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2884071 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2884071 00:11:42.008 04:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 2884071 ']' 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:42.008 04:22:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.009 [2024-11-05 04:22:54.839457] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:11:42.009 [2024-11-05 04:22:54.839527] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.009 [2024-11-05 04:22:54.921900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.009 [2024-11-05 04:22:54.965266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.009 [2024-11-05 04:22:54.965305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.009 [2024-11-05 04:22:54.965313] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.009 [2024-11-05 04:22:54.965320] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.009 [2024-11-05 04:22:54.965325] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
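By this point nvmf_tcp_init in nvmf/common.sh has built the physical-NIC loopback topology the discovery test depends on: one E810 port (cvl_0_0) is moved into a network namespace to act as the target side, the other (cvl_0_1) stays in the default namespace as the initiator, and nvmf_tgt is launched inside the namespace. A condensed sketch of that plumbing, with the commands copied from the trace above — only the SPDK tree path is abbreviated to $SPDK, and the iptables comment text is simplified:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port isolated in the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
    ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Every later nvme discover/connect in this test reaches the target at 10.0.0.2:4420, while rpc_cmd drives it over the /var/tmp/spdk.sock UNIX socket.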
00:11:42.009 [2024-11-05 04:22:54.967047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.009 [2024-11-05 04:22:54.967164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.009 [2024-11-05 04:22:54.967307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.009 [2024-11-05 04:22:54.967308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.270 [2024-11-05 04:22:55.698008] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.270 Null1 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.270 04:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.270 [2024-11-05 04:22:55.758335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.270 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.270 Null2 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:11:42.271 Null3 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.271 Null4 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.271 04:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.271 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.532 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.532 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:42.532 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.532 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.532 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.532 04:22:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:11:42.532 00:11:42.532 Discovery Log Number of Records 6, Generation counter 6 00:11:42.532 =====Discovery Log Entry 0====== 00:11:42.532 trtype: tcp 00:11:42.532 adrfam: ipv4 00:11:42.532 subtype: current discovery subsystem 00:11:42.532 treq: not required 00:11:42.532 portid: 0 00:11:42.532 trsvcid: 4420 00:11:42.532 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:42.532 traddr: 10.0.0.2 00:11:42.532 eflags: explicit discovery connections, duplicate discovery information 00:11:42.532 sectype: none 00:11:42.532 =====Discovery Log Entry 1====== 00:11:42.532 trtype: tcp 00:11:42.532 adrfam: ipv4 00:11:42.532 subtype: nvme subsystem 00:11:42.532 treq: not required 00:11:42.532 portid: 0 00:11:42.532 trsvcid: 4420 00:11:42.532 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:42.532 traddr: 10.0.0.2 00:11:42.532 eflags: none 00:11:42.532 sectype: none 00:11:42.532 =====Discovery Log Entry 2====== 00:11:42.532 trtype: tcp 00:11:42.532 adrfam: ipv4 00:11:42.532 subtype: nvme subsystem 00:11:42.532 treq: not required 00:11:42.532 portid: 0 00:11:42.532 trsvcid: 4420 00:11:42.532 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:42.532 traddr: 10.0.0.2 00:11:42.532 eflags: none 00:11:42.532 sectype: none 00:11:42.532 =====Discovery Log Entry 3====== 00:11:42.532 trtype: tcp 00:11:42.532 adrfam: ipv4 00:11:42.532 subtype: nvme subsystem 00:11:42.532 treq: not required 00:11:42.532 portid: 0 00:11:42.532 trsvcid: 4420 00:11:42.532 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:42.532 traddr: 10.0.0.2 00:11:42.532 eflags: none 00:11:42.532 sectype: none 00:11:42.532 =====Discovery Log Entry 4====== 00:11:42.532 trtype: tcp 00:11:42.533 adrfam: ipv4 00:11:42.533 subtype: nvme subsystem 
00:11:42.533 treq: not required 00:11:42.533 portid: 0 00:11:42.533 trsvcid: 4420 00:11:42.533 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:42.533 traddr: 10.0.0.2 00:11:42.533 eflags: none 00:11:42.533 sectype: none 00:11:42.533 =====Discovery Log Entry 5====== 00:11:42.533 trtype: tcp 00:11:42.533 adrfam: ipv4 00:11:42.533 subtype: discovery subsystem referral 00:11:42.533 treq: not required 00:11:42.533 portid: 0 00:11:42.533 trsvcid: 4430 00:11:42.533 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:42.533 traddr: 10.0.0.2 00:11:42.533 eflags: none 00:11:42.533 sectype: none 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:42.533 Perform nvmf subsystem discovery via RPC 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.533 [ 00:11:42.533 { 00:11:42.533 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:42.533 "subtype": "Discovery", 00:11:42.533 "listen_addresses": [ 00:11:42.533 { 00:11:42.533 "trtype": "TCP", 00:11:42.533 "adrfam": "IPv4", 00:11:42.533 "traddr": "10.0.0.2", 00:11:42.533 "trsvcid": "4420" 00:11:42.533 } 00:11:42.533 ], 00:11:42.533 "allow_any_host": true, 00:11:42.533 "hosts": [] 00:11:42.533 }, 00:11:42.533 { 00:11:42.533 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.533 "subtype": "NVMe", 00:11:42.533 "listen_addresses": [ 00:11:42.533 { 00:11:42.533 "trtype": "TCP", 00:11:42.533 "adrfam": "IPv4", 00:11:42.533 "traddr": "10.0.0.2", 00:11:42.533 "trsvcid": "4420" 00:11:42.533 } 00:11:42.533 ], 00:11:42.533 "allow_any_host": true, 00:11:42.533 "hosts": [], 00:11:42.533 "serial_number": "SPDK00000000000001", 00:11:42.533 "model_number": "SPDK bdev Controller", 00:11:42.533 "max_namespaces": 32, 00:11:42.533 "min_cntlid": 1, 00:11:42.533 "max_cntlid": 65519, 00:11:42.533 "namespaces": [ 00:11:42.533 { 00:11:42.533 "nsid": 1, 00:11:42.533 "bdev_name": "Null1", 00:11:42.533 "name": "Null1", 00:11:42.533 "nguid": "E69D6ACA27234148B16EC79F1F7662D7", 00:11:42.533 "uuid": "e69d6aca-2723-4148-b16e-c79f1f7662d7" 00:11:42.533 } 00:11:42.533 ] 00:11:42.533 }, 00:11:42.533 { 00:11:42.533 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:42.533 "subtype": "NVMe", 00:11:42.533 "listen_addresses": [ 00:11:42.533 { 00:11:42.533 "trtype": "TCP", 00:11:42.533 "adrfam": "IPv4", 00:11:42.533 "traddr": "10.0.0.2", 00:11:42.533 "trsvcid": "4420" 00:11:42.533 } 00:11:42.533 ], 00:11:42.533 "allow_any_host": true, 00:11:42.533 "hosts": [], 00:11:42.533 "serial_number": "SPDK00000000000002", 00:11:42.533 "model_number": "SPDK bdev Controller", 00:11:42.533 "max_namespaces": 32, 00:11:42.533 "min_cntlid": 1, 00:11:42.533 "max_cntlid": 65519, 00:11:42.533 "namespaces": [ 00:11:42.533 { 00:11:42.533 "nsid": 1, 00:11:42.533 "bdev_name": "Null2", 00:11:42.533 "name": "Null2", 00:11:42.533 "nguid": "12A9A74470AB433E86BE2086A61E2B4F", 00:11:42.533 "uuid": "12a9a744-70ab-433e-86be-2086a61e2b4f" 00:11:42.533 } 00:11:42.533 ] 00:11:42.533 }, 00:11:42.533 { 00:11:42.533 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:42.533 "subtype": "NVMe", 00:11:42.533 "listen_addresses": [ 00:11:42.533 { 00:11:42.533 "trtype": "TCP", 00:11:42.533 "adrfam": "IPv4", 00:11:42.533 "traddr": "10.0.0.2", 
00:11:42.533 "trsvcid": "4420" 00:11:42.533 } 00:11:42.533 ], 00:11:42.533 "allow_any_host": true, 00:11:42.533 "hosts": [], 00:11:42.533 "serial_number": "SPDK00000000000003", 00:11:42.533 "model_number": "SPDK bdev Controller", 00:11:42.533 "max_namespaces": 32, 00:11:42.533 "min_cntlid": 1, 00:11:42.533 "max_cntlid": 65519, 00:11:42.533 "namespaces": [ 00:11:42.533 { 00:11:42.533 "nsid": 1, 00:11:42.533 "bdev_name": "Null3", 00:11:42.533 "name": "Null3", 00:11:42.533 "nguid": "1E69EB15786E4C078D9BC87EB99371A0", 00:11:42.533 "uuid": "1e69eb15-786e-4c07-8d9b-c87eb99371a0" 00:11:42.533 } 00:11:42.533 ] 00:11:42.533 }, 00:11:42.533 { 00:11:42.533 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:42.533 "subtype": "NVMe", 00:11:42.533 "listen_addresses": [ 00:11:42.533 { 00:11:42.533 "trtype": "TCP", 00:11:42.533 "adrfam": "IPv4", 00:11:42.533 "traddr": "10.0.0.2", 00:11:42.533 "trsvcid": "4420" 00:11:42.533 } 00:11:42.533 ], 00:11:42.533 "allow_any_host": true, 00:11:42.533 "hosts": [], 00:11:42.533 "serial_number": "SPDK00000000000004", 00:11:42.533 "model_number": "SPDK bdev Controller", 00:11:42.533 "max_namespaces": 32, 00:11:42.533 "min_cntlid": 1, 00:11:42.533 "max_cntlid": 65519, 00:11:42.533 "namespaces": [ 00:11:42.533 { 00:11:42.533 "nsid": 1, 00:11:42.533 "bdev_name": "Null4", 00:11:42.533 "name": "Null4", 00:11:42.533 "nguid": "F7FEC297B5CD4F8ABB1DA15E9B6A2E29", 00:11:42.533 "uuid": "f7fec297-b5cd-4f8a-bb1d-a15e9b6a2e29" 00:11:42.533 } 00:11:42.533 ] 00:11:42.533 } 00:11:42.533 ] 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.533 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.795 04:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:42.795 04:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.795 rmmod nvme_tcp 00:11:42.795 rmmod nvme_fabrics 00:11:42.795 rmmod nvme_keyring 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2884071 ']' 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2884071 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 2884071 ']' 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 2884071 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2884071 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2884071' 00:11:42.795 killing process with pid 2884071 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 2884071 00:11:42.795 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 2884071 00:11:43.056 04:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.056 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:43.056 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:43.056 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:43.056 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:43.056 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:43.056 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:43.056 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.056 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.056 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.056 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.056 04:22:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:45.603 00:11:45.603 real 0m11.390s 00:11:45.603 user 0m8.629s 00:11:45.603 sys 0m5.911s 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:45.603 ************************************ 00:11:45.603 END TEST nvmf_target_discovery 00:11:45.603 ************************************ 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:45.603 ************************************ 00:11:45.603 START TEST nvmf_referrals 00:11:45.603 ************************************ 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:45.603 * Looking for test storage... 
00:11:45.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:45.603 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:45.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.604 --rc genhtml_branch_coverage=1 00:11:45.604 --rc genhtml_function_coverage=1 00:11:45.604 --rc genhtml_legend=1 00:11:45.604 --rc geninfo_all_blocks=1 00:11:45.604 --rc geninfo_unexecuted_blocks=1 00:11:45.604 00:11:45.604 ' 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:45.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.604 --rc genhtml_branch_coverage=1 00:11:45.604 --rc genhtml_function_coverage=1 00:11:45.604 --rc genhtml_legend=1 00:11:45.604 --rc geninfo_all_blocks=1 00:11:45.604 --rc geninfo_unexecuted_blocks=1 00:11:45.604 00:11:45.604 ' 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:45.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.604 --rc genhtml_branch_coverage=1 00:11:45.604 --rc genhtml_function_coverage=1 00:11:45.604 --rc genhtml_legend=1 00:11:45.604 --rc geninfo_all_blocks=1 00:11:45.604 --rc geninfo_unexecuted_blocks=1 00:11:45.604 00:11:45.604 ' 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:45.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.604 --rc genhtml_branch_coverage=1 00:11:45.604 --rc genhtml_function_coverage=1 00:11:45.604 --rc genhtml_legend=1 00:11:45.604 --rc geninfo_all_blocks=1 00:11:45.604 --rc geninfo_unexecuted_blocks=1 00:11:45.604 00:11:45.604 ' 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:45.604 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:45.604 04:22:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:52.195 04:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.195 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:52.456 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:52.456 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:52.456 
04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.456 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:52.457 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:52.457 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:52.457 04:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.457 04:23:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.457 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.457 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.457 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:52.457 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:52.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:52.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.615 ms 00:11:52.719 00:11:52.719 --- 10.0.0.2 ping statistics --- 00:11:52.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.719 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:11:52.719 00:11:52.719 --- 10.0.0.1 ping statistics --- 00:11:52.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.719 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2888537 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2888537 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 2888537 ']' 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
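The nvmftestinit trace above builds a point-to-point NVMe/TCP test bed from the two e810 ports: the target port (cvl_0_0) is moved into a private network namespace, both ends are addressed on 10.0.0.0/24, an iptables rule opens the NVMe/TCP port, and connectivity is verified with a ping in each direction before nvmf_tgt is started inside the namespace. A minimal standalone sketch of that sequence, using the interface names and addresses from the log (run as root; the real harness additionally tags its iptables rule with an SPDK_NVMF comment so it can be cleaned up later):

#!/usr/bin/env bash
# Sketch of the netns-based TCP test bed from the trace; interface names
# (cvl_0_0 = target port, cvl_0_1 = initiator port) are taken from the log.
set -e

ip netns add cvl_0_0_ns_spdk                       # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it

ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Accept NVMe/TCP traffic on the initiator interface (port 4420).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# Connectivity check in both directions, as nvmftestinit does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# The target is then launched inside the namespace, as in the trace:
# ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF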
00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:52.719 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.719 [2024-11-05 04:23:06.251972] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:11:52.719 [2024-11-05 04:23:06.252021] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.719 [2024-11-05 04:23:06.332609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:52.981 [2024-11-05 04:23:06.373194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.981 [2024-11-05 04:23:06.373233] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.981 [2024-11-05 04:23:06.373243] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.981 [2024-11-05 04:23:06.373251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.981 [2024-11-05 04:23:06.373258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.981 [2024-11-05 04:23:06.375216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.981 [2024-11-05 04:23:06.375332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.981 [2024-11-05 04:23:06.375490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:52.982 [2024-11-05 04:23:06.375490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.982 [2024-11-05 04:23:06.503694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:11:52.982 [2024-11-05 04:23:06.519913] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.982 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.243 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.244 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:53.244 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.244 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.244 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.244 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:53.244 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:53.244 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.244 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.244 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.505 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:53.505 04:23:06 
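The referral checks above drive the target over its JSON-RPC socket through the rpc_cmd wrapper. As a minimal standalone sketch of the same add/verify/remove cycle (assuming a running nvmf_tgt and the scripts/rpc.py helper from the SPDK tree; the invocation path is illustrative, the RPC names are the ones exercised above):

    # add three referrals pointing at other discovery services, as in this test
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430

    # the target should now report exactly three referrals ...
    test "$(scripts/rpc.py nvmf_discovery_get_referrals | jq length)" -eq 3

    # ... and removing them brings the count back to zero
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done
    test "$(scripts/rpc.py nvmf_discovery_get_referrals | jq length)" -eq 0
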
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:53.505 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:53.505 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:53.505 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:53.505 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:53.505 04:23:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:53.505 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:53.505 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:53.505 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:53.505 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.505 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.505 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.505 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:53.505 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.505 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.505 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.505 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:53.505 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:53.505 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:53.506 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:53.506 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.506 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:53.506 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:53.506 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.766 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:53.766 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:53.766 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:53.766 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:11:53.767 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:53.767 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:53.767 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:53.767 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:53.767 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:53.767 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:53.767 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:53.767 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:53.767 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:53.767 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:53.767 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:54.028 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:54.028 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:54.028 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:54.028 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:54.028 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.028 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.289 04:23:07 
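The get_discovery_entries helper reduces the nvme discover JSON to one record subtype, which is how the test distinguishes a subsystem-NQN referral from a plain discovery referral. A sketch of the two filters used above, assuming nvme-cli with JSON output (the --hostnqn/--hostid arguments passed in this run are elided):

    # keep only "nvme subsystem" records and print their subsystem NQNs
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype == "nvme subsystem") | .subnqn'

    # referral addresses, excluding the discovery service we are already connected to
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
        | sort
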
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:54.289 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:54.550 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:54.550 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:54.550 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:54.550 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:54.550 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:54.550 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.550 04:23:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:54.550 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:54.550 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:54.550 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:54.550 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:11:54.550 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.550 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:54.812 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:55.074 rmmod nvme_tcp 00:11:55.074 rmmod nvme_fabrics 00:11:55.074 rmmod nvme_keyring 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2888537 ']' 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2888537 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 2888537 ']' 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 2888537 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2888537 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2888537' 00:11:55.074 killing process with pid 2888537 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 2888537 00:11:55.074 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 2888537 00:11:55.335 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:55.335 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:55.335 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:55.336 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:55.336 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:55.336 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:55.336 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:55.336 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:55.336 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:55.336 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.336 04:23:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.336 04:23:08 
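nvmftestfini unwinds the fixture in roughly the reverse order it was built: unload the host-side NVMe modules, kill the target by PID, and strip the SPDK-tagged firewall rules. A condensed sketch with the PID from this run (the real helper retries the modprobe up to 20 times because the modules can stay busy briefly):

    # host side: unload the kernel NVMe/TCP stack
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics

    # target side: stop nvmf_tgt (PID 2888537 in this run)
    kill 2888537

    # drop only the iptables rules the test added, identified by their SPDK_NVMF comment
    iptables-save | grep -v SPDK_NVMF | iptables-restore
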
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.249 04:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:57.249 00:11:57.249 real 0m12.185s 00:11:57.249 user 0m12.587s 00:11:57.249 sys 0m6.253s 00:11:57.249 04:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:57.249 04:23:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.249 ************************************ 00:11:57.249 END TEST nvmf_referrals 00:11:57.249 ************************************ 00:11:57.510 04:23:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:57.510 04:23:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:57.510 04:23:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:57.510 04:23:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:57.510 ************************************ 00:11:57.510 START TEST nvmf_connect_disconnect 00:11:57.510 ************************************ 00:11:57.510 04:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:57.510 * Looking for test storage... 00:11:57.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:57.510 04:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:57.510 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:57.511 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:57.511 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:57.511 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:57.511 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:57.511 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:57.511 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:57.511 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:57.511 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:57.511 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:57.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.511 --rc genhtml_branch_coverage=1 00:11:57.511 --rc genhtml_function_coverage=1 00:11:57.511 --rc genhtml_legend=1 00:11:57.511 --rc geninfo_all_blocks=1 00:11:57.511 --rc geninfo_unexecuted_blocks=1 00:11:57.511 00:11:57.511 ' 00:11:57.511 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:57.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.511 --rc genhtml_branch_coverage=1 00:11:57.511 --rc genhtml_function_coverage=1 00:11:57.511 --rc genhtml_legend=1 00:11:57.511 --rc geninfo_all_blocks=1 00:11:57.511 --rc geninfo_unexecuted_blocks=1 00:11:57.511 00:11:57.511 ' 00:11:57.511 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:57.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.511 --rc genhtml_branch_coverage=1 00:11:57.511 --rc genhtml_function_coverage=1 00:11:57.511 --rc genhtml_legend=1 00:11:57.511 --rc geninfo_all_blocks=1 00:11:57.511 --rc geninfo_unexecuted_blocks=1 00:11:57.511 00:11:57.511 ' 00:11:57.511 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:57.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:57.511 --rc genhtml_branch_coverage=1 00:11:57.511 --rc genhtml_function_coverage=1 00:11:57.511 --rc genhtml_legend=1 00:11:57.511 --rc geninfo_all_blocks=1 00:11:57.511 --rc geninfo_unexecuted_blocks=1 00:11:57.511 00:11:57.511 ' 00:11:57.511 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.773 04:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:57.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:57.773 04:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.919 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:05.919 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:05.919 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:05.919 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:05.919 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:05.919 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:05.919 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:05.919 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:05.919 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:05.919 
04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:05.919 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:05.919 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:05.919 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:05.919 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:05.920 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.920 
04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:05.920 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:05.920 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:05.920 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:05.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:12:05.920 00:12:05.920 --- 10.0.0.2 ping statistics --- 00:12:05.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.920 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:05.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:12:05.920 00:12:05.920 --- 10.0.0.1 ping statistics --- 00:12:05.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.920 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:05.920 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.921 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:05.921 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:05.921 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:05.921 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:05.921 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:05.921 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.921 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2893309 00:12:05.921 04:23:18 
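nvmf_tcp_init splits the two E810 ports into a target side and an initiator side: the first port moves into a private network namespace for nvmf_tgt, the second stays in the root namespace for the kernel initiator, and a ping in each direction proves the link before anything listens on it. The same sequence with the names from this run:

    # target port into its own namespace; initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # address and bring up both ends of the link
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up

    # open the NVMe/TCP port with a tagged rule so teardown can find it again
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # reachability in both directions
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
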
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2893309 00:12:05.921 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.921 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 2893309 ']' 00:12:05.921 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.921 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:05.921 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.921 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:05.921 04:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.921 [2024-11-05 04:23:18.604391] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:12:05.921 [2024-11-05 04:23:18.604459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.921 [2024-11-05 04:23:18.686927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.921 [2024-11-05 04:23:18.728581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.921 [2024-11-05 04:23:18.728618] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.921 [2024-11-05 04:23:18.728626] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.921 [2024-11-05 04:23:18.728633] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.921 [2024-11-05 04:23:18.728638] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:05.921 [2024-11-05 04:23:18.730478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.921 [2024-11-05 04:23:18.730602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.921 [2024-11-05 04:23:18.730778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.921 [2024-11-05 04:23:18.730779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.921 [2024-11-05 04:23:19.457686] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.921 04:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:05.921 [2024-11-05 04:23:19.526076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:05.921 04:23:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:10.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.437 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.720 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:24.327 rmmod nvme_tcp 00:12:24.327 rmmod nvme_fabrics 00:12:24.327 rmmod nvme_keyring 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2893309 ']' 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2893309 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2893309 ']' 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 2893309 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
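Condensed, the rpc_cmd calls traced in this test body provision the target as follows; rpc_cmd wraps scripts/rpc.py against /var/tmp/spdk.sock, and every command and argument below is copied from the trace above:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 64 512          # prints the new bdev name, Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The five 'disconnected 1 controller(s)' lines above are the nvme-cli half of each of the num_iterations=5 passes; one plausible shape of that loop, noting the exact connect flags in connect_disconnect.sh may differ:

    for i in $(seq 1 5); do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # emits 'NQN:... disconnected 1 controller(s)'
    done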
00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2893309 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2893309' 00:12:24.327 killing process with pid 2893309 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 2893309 00:12:24.327 04:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 2893309 00:12:24.588 04:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:24.588 04:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:24.588 04:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:24.588 04:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:24.588 04:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:24.588 04:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:24.588 04:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:24.588 04:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:24.588 04:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:24.588 04:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.588 04:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:24.588 04:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:26.502 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:26.502 00:12:26.502 real 0m29.168s 00:12:26.502 user 1m18.916s 00:12:26.502 sys 0m7.011s 00:12:26.502 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:26.502 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.502 ************************************ 00:12:26.502 END TEST nvmf_connect_disconnect 00:12:26.502 ************************************ 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:26.764 04:23:40 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:26.764 ************************************ 00:12:26.764 START TEST nvmf_multitarget 00:12:26.764 ************************************ 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:26.764 * Looking for test storage... 00:12:26.764 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:26.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.764 --rc genhtml_branch_coverage=1 00:12:26.764 --rc genhtml_function_coverage=1 00:12:26.764 --rc genhtml_legend=1 00:12:26.764 --rc geninfo_all_blocks=1 00:12:26.764 --rc geninfo_unexecuted_blocks=1 00:12:26.764 00:12:26.764 ' 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:26.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.764 --rc genhtml_branch_coverage=1 00:12:26.764 --rc genhtml_function_coverage=1 00:12:26.764 --rc genhtml_legend=1 00:12:26.764 --rc geninfo_all_blocks=1 00:12:26.764 --rc geninfo_unexecuted_blocks=1 00:12:26.764 00:12:26.764 ' 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:26.764 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.764 --rc genhtml_branch_coverage=1 00:12:26.764 --rc genhtml_function_coverage=1 00:12:26.764 --rc genhtml_legend=1 00:12:26.764 --rc geninfo_all_blocks=1 00:12:26.764 --rc geninfo_unexecuted_blocks=1 00:12:26.764 00:12:26.764 ' 00:12:26.764 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:26.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:26.765 --rc genhtml_branch_coverage=1 00:12:26.765 --rc genhtml_function_coverage=1 00:12:26.765 --rc genhtml_legend=1 00:12:26.765 --rc geninfo_all_blocks=1 00:12:26.765 --rc geninfo_unexecuted_blocks=1 00:12:26.765 00:12:26.765 ' 00:12:26.765 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:26.765 04:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:26.765 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:26.765 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:26.765 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:26.765 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:26.765 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:26.765 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:26.765 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:26.765 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:26.765 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:27.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:27.027 04:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:27.027 04:23:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
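The discovery pass underway here classifies PCI NICs by vendor:device ID (the e810/x722/mlx arrays being populated around this point), then picks the two E810 ports, cvl_0_0 and cvl_0_1, and wires one of them into a private namespace. The namespace plumbing, as traced further below, amounts to the following; device names and addresses are copied from this run, and the iptables comment argument is trimmed:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port moves into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator port stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> root ns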
00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:35.176 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:35.176 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:35.176 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:35.176 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:35.176 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:35.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:35.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.569 ms 00:12:35.177 00:12:35.177 --- 10.0.0.2 ping statistics --- 00:12:35.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.177 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:35.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:35.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:12:35.177 00:12:35.177 --- 10.0.0.1 ping statistics --- 00:12:35.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:35.177 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2901429 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2901429 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 2901429 ']' 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:35.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:35.177 04:23:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.177 [2024-11-05 04:23:47.915550] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:12:35.177 [2024-11-05 04:23:47.915602] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:35.177 [2024-11-05 04:23:47.993222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:35.177 [2024-11-05 04:23:48.028852] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:35.177 [2024-11-05 04:23:48.028889] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:35.177 [2024-11-05 04:23:48.028898] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:35.177 [2024-11-05 04:23:48.028905] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:35.177 [2024-11-05 04:23:48.028911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:35.177 [2024-11-05 04:23:48.030447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:35.177 [2024-11-05 04:23:48.030562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:35.177 [2024-11-05 04:23:48.030715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:35.177 [2024-11-05 04:23:48.030716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:35.177 "nvmf_tgt_1" 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:35.177 "nvmf_tgt_2" 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:35.177 true 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:35.177 true 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:35.177 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:35.439 rmmod nvme_tcp 00:12:35.439 rmmod nvme_fabrics 00:12:35.439 rmmod nvme_keyring 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2901429 ']' 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2901429 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 2901429 ']' 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 2901429 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:35.439 04:23:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2901429 00:12:35.439 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:35.439 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:35.439 04:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2901429' 00:12:35.439 killing process with pid 2901429 00:12:35.439 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 2901429 00:12:35.439 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 2901429 00:12:35.700 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:35.700 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:35.700 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:35.700 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:35.700 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:35.700 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:35.700 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:35.700 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:35.700 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:35.700 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.700 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:35.700 04:23:49 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.614 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:37.614 00:12:37.614 real 0m11.051s 00:12:37.614 user 0m7.348s 00:12:37.614 sys 0m6.034s 00:12:37.614 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:37.614 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:37.614 ************************************ 00:12:37.614 END TEST nvmf_multitarget 00:12:37.614 ************************************ 00:12:37.874 04:23:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:37.874 04:23:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:37.874 04:23:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:37.874 04:23:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.874 ************************************ 00:12:37.874 START TEST nvmf_rpc 00:12:37.874 ************************************ 00:12:37.874 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:37.874 * Looking for test storage... 
00:12:37.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.874 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:37.874 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:12:37.874 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:37.874 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:37.874 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.874 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.874 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:37.875 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:38.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.137 --rc genhtml_branch_coverage=1 00:12:38.137 --rc genhtml_function_coverage=1 00:12:38.137 --rc genhtml_legend=1 00:12:38.137 --rc geninfo_all_blocks=1 00:12:38.137 --rc geninfo_unexecuted_blocks=1 00:12:38.137 00:12:38.137 ' 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:38.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.137 --rc genhtml_branch_coverage=1 00:12:38.137 --rc genhtml_function_coverage=1 00:12:38.137 --rc genhtml_legend=1 00:12:38.137 --rc geninfo_all_blocks=1 00:12:38.137 --rc geninfo_unexecuted_blocks=1 00:12:38.137 00:12:38.137 ' 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:38.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.137 --rc genhtml_branch_coverage=1 00:12:38.137 --rc genhtml_function_coverage=1 00:12:38.137 --rc genhtml_legend=1 00:12:38.137 --rc geninfo_all_blocks=1 00:12:38.137 --rc geninfo_unexecuted_blocks=1 00:12:38.137 00:12:38.137 ' 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:38.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.137 --rc genhtml_branch_coverage=1 00:12:38.137 --rc genhtml_function_coverage=1 00:12:38.137 --rc genhtml_legend=1 00:12:38.137 --rc geninfo_all_blocks=1 00:12:38.137 --rc geninfo_unexecuted_blocks=1 00:12:38.137 00:12:38.137 ' 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
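The lt/cmp_versions trace in this suite's preamble (it repeats once per test script) is a pure-bash dotted-version comparison used to decide which lcov flags to export. The real helper in scripts/common.sh splits both versions on dots and compares fields numerically, as traced above; a compact re-sketch of the same check using sort -V instead, offered only as an illustration:

    lt() {
        # true when $1 sorts strictly before $2 as a version string
        [ "$1" != "$2" ] && [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    lt 1.15 2 && echo "lcov 1.15 predates 2.x"    # mirrors the 'lt 1.15 2' check in the trace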
00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:38.137 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:38.137 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:38.138 04:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:38.138 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:38.138 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:38.138 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:38.138 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.138 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:38.138 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.138 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:38.138 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:38.138 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:38.138 04:23:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:46.911 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:46.911 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:46.911 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:46.911 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:46.911 04:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:46.911 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:46.911 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:12:46.911 00:12:46.911 --- 10.0.0.2 ping statistics --- 00:12:46.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.911 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:46.911 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:46.911 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:12:46.911 00:12:46.911 --- 10.0.0.1 ping statistics --- 00:12:46.911 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:46.911 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:46.911 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.912 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2905803 00:12:46.912 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2905803 00:12:46.912 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:46.912 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 2905803 ']' 00:12:46.912 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.912 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:46.912 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.912 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:46.912 04:23:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.912 [2024-11-05 04:23:58.855785] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
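The nvmf_tcp_init sequence traced above builds the test topology: the target port (cvl_0_0) is moved into a fresh network namespace and addressed 10.0.0.2/24, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1/24, an iptables rule opens TCP/4420, and both directions are ping-verified. Condensed from the trace (requires root; interface names are the ones from this run):

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # drop any stale addresses
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target side into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target ns -> initiator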
00:12:46.912 [2024-11-05 04:23:58.855850] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:46.912 [2024-11-05 04:23:58.937738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:46.912 [2024-11-05 04:23:58.979499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:46.912 [2024-11-05 04:23:58.979536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:46.912 [2024-11-05 04:23:58.979544] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:46.912 [2024-11-05 04:23:58.979551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:46.912 [2024-11-05 04:23:58.979557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:46.912 [2024-11-05 04:23:58.981391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:46.912 [2024-11-05 04:23:58.981507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:46.912 [2024-11-05 04:23:58.981662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.912 [2024-11-05 04:23:58.981664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:46.912 "tick_rate": 2400000000, 00:12:46.912 "poll_groups": [ 00:12:46.912 { 00:12:46.912 "name": "nvmf_tgt_poll_group_000", 00:12:46.912 "admin_qpairs": 0, 00:12:46.912 "io_qpairs": 0, 00:12:46.912 "current_admin_qpairs": 0, 00:12:46.912 "current_io_qpairs": 0, 00:12:46.912 "pending_bdev_io": 0, 00:12:46.912 "completed_nvme_io": 0, 00:12:46.912 "transports": [] 00:12:46.912 }, 00:12:46.912 { 00:12:46.912 "name": "nvmf_tgt_poll_group_001", 00:12:46.912 "admin_qpairs": 0, 00:12:46.912 "io_qpairs": 0, 00:12:46.912 "current_admin_qpairs": 0, 00:12:46.912 "current_io_qpairs": 0, 00:12:46.912 "pending_bdev_io": 0, 00:12:46.912 "completed_nvme_io": 0, 00:12:46.912 "transports": [] 00:12:46.912 }, 00:12:46.912 { 00:12:46.912 "name": "nvmf_tgt_poll_group_002", 00:12:46.912 "admin_qpairs": 0, 00:12:46.912 "io_qpairs": 0, 00:12:46.912 
"current_admin_qpairs": 0, 00:12:46.912 "current_io_qpairs": 0, 00:12:46.912 "pending_bdev_io": 0, 00:12:46.912 "completed_nvme_io": 0, 00:12:46.912 "transports": [] 00:12:46.912 }, 00:12:46.912 { 00:12:46.912 "name": "nvmf_tgt_poll_group_003", 00:12:46.912 "admin_qpairs": 0, 00:12:46.912 "io_qpairs": 0, 00:12:46.912 "current_admin_qpairs": 0, 00:12:46.912 "current_io_qpairs": 0, 00:12:46.912 "pending_bdev_io": 0, 00:12:46.912 "completed_nvme_io": 0, 00:12:46.912 "transports": [] 00:12:46.912 } 00:12:46.912 ] 00:12:46.912 }' 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.912 [2024-11-05 04:23:59.828869] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:46.912 "tick_rate": 2400000000, 00:12:46.912 "poll_groups": [ 00:12:46.912 { 00:12:46.912 "name": "nvmf_tgt_poll_group_000", 00:12:46.912 "admin_qpairs": 0, 00:12:46.912 "io_qpairs": 0, 00:12:46.912 "current_admin_qpairs": 0, 00:12:46.912 "current_io_qpairs": 0, 00:12:46.912 "pending_bdev_io": 0, 00:12:46.912 "completed_nvme_io": 0, 00:12:46.912 "transports": [ 00:12:46.912 { 00:12:46.912 "trtype": "TCP" 00:12:46.912 } 00:12:46.912 ] 00:12:46.912 }, 00:12:46.912 { 00:12:46.912 "name": "nvmf_tgt_poll_group_001", 00:12:46.912 "admin_qpairs": 0, 00:12:46.912 "io_qpairs": 0, 00:12:46.912 "current_admin_qpairs": 0, 00:12:46.912 "current_io_qpairs": 0, 00:12:46.912 "pending_bdev_io": 0, 00:12:46.912 "completed_nvme_io": 0, 00:12:46.912 "transports": [ 00:12:46.912 { 00:12:46.912 "trtype": "TCP" 00:12:46.912 } 00:12:46.912 ] 00:12:46.912 }, 00:12:46.912 { 00:12:46.912 "name": "nvmf_tgt_poll_group_002", 00:12:46.912 "admin_qpairs": 0, 00:12:46.912 "io_qpairs": 0, 00:12:46.912 "current_admin_qpairs": 0, 00:12:46.912 "current_io_qpairs": 0, 00:12:46.912 "pending_bdev_io": 0, 00:12:46.912 "completed_nvme_io": 0, 00:12:46.912 "transports": [ 00:12:46.912 { 00:12:46.912 "trtype": "TCP" 
00:12:46.912 } 00:12:46.912 ] 00:12:46.912 }, 00:12:46.912 { 00:12:46.912 "name": "nvmf_tgt_poll_group_003", 00:12:46.912 "admin_qpairs": 0, 00:12:46.912 "io_qpairs": 0, 00:12:46.912 "current_admin_qpairs": 0, 00:12:46.912 "current_io_qpairs": 0, 00:12:46.912 "pending_bdev_io": 0, 00:12:46.912 "completed_nvme_io": 0, 00:12:46.912 "transports": [ 00:12:46.912 { 00:12:46.912 "trtype": "TCP" 00:12:46.912 } 00:12:46.912 ] 00:12:46.912 } 00:12:46.912 ] 00:12:46.912 }' 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.912 Malloc1 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.912 04:23:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
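The jcount/jsum checks above reduce the nvmf_get_stats JSON with jq and plain shell: jcount counts matching values, jsum totals them across poll groups. A standalone sketch over a captured stats document, following the pipelines in the trace (rpc_cmd is the test framework's RPC wrapper):

    stats=$(rpc_cmd nvmf_get_stats)        # JSON document as shown above
    # jcount: number of poll groups (expected to equal the core count, 4 here)
    jq '.poll_groups[].name' <<<"$stats" | wc -l
    # jsum: total admin qpairs across all poll groups (0 before any connects)
    jq '.poll_groups[].admin_qpairs' <<<"$stats" | awk '{s+=$1} END {print s}'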
common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.912 [2024-11-05 04:24:00.029027] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:12:46.912 [2024-11-05 04:24:00.065860] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:46.912 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:46.912 could not add new controller: failed to write to nvme-fabrics device 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:46.912 04:24:00 
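The failure above is the access-control check working as intended: with allow_any_host disabled, a connect from a hostnqn that is not on the subsystem's whitelist is rejected at the fabrics level with the "does not allow host" error. The remedy, as the script does next, is to register the host explicitly. Sketched with the framework's rpc_cmd wrapper (the hostnqn below is this run's generated one):

    rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1   # enforce the whitelist
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    # a subsequent 'nvme connect --hostnqn=<that NQN> ...' now succeeds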
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.912 04:24:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:48.295 04:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:48.295 04:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:48.295 04:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:48.295 04:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:48.295 04:24:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:50.207 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.207 [2024-11-05 04:24:03.823509] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:12:50.468 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:50.468 could not add new controller: failed to write to nvme-fabrics device 00:12:50.468 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:50.468 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:50.468 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:50.468 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:50.468 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:50.468 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.468 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:50.468 
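Both rejected connects run under the framework's NOT helper, which inverts the exit status so an expected failure keeps the test green. A simplified sketch of the pattern only; the real helper in autotest_common.sh, as the trace shows, additionally validates the executable via valid_exec_arg and special-cases exit codes above 128 (signal deaths):

    # Succeed only if the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1   # command unexpectedly succeeded
        fi
        return 0       # it failed, which is what we wanted
    }
    NOT false && echo "expected failure observed"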
04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.468 04:24:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:51.851 04:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:51.851 04:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:51.851 04:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:51.851 04:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:51.851 04:24:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:54.394 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:54.394 
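Each successful connect is followed by waitforserial, which polls lsblk until a block device advertising the subsystem's serial number shows up. A condensed equivalent of the loop as traced (the traced version sleeps before re-checking and compares against an expected device count; serial and retry limit are this run's values):

    waitforserial() {
        local serial=$1 i=0 nvme_devices=0
        while (( i++ <= 15 )); do
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices >= 1 )) && return 0
            sleep 2
        done
        return 1   # device never appeared
    }
    waitforserial SPDKISFASTANDAWESOME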
04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.394 [2024-11-05 04:24:07.578076] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.394 04:24:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.779 04:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:55.779 04:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:55.779 04:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.779 04:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:55.779 04:24:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:57.692 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.692 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.953 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.953 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:57.953 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.953 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.953 [2024-11-05 04:24:11.346886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:57.953 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.953 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:57.953 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.953 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.953 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.953 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:57.953 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.953 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:57.953 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.953 04:24:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:59.338 04:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:59.338 04:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:12:59.338 04:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:59.338 04:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:59.338 04:24:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:01.253 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:01.254 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:01.254 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.254 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:01.254 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.254 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:01.254 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.515 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.515 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:01.515 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:01.515 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.515 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:01.515 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.515 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:01.515 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:01.515 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.515 04:24:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.515 [2024-11-05 04:24:15.032702] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.515 04:24:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.430 04:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.430 04:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:03.430 04:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.430 04:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:03.430 04:24:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:05.344 
04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:05.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.344 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
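The passes above repeat one create/attach/connect/tear-down cycle per iteration of the rpc.sh loop. One iteration, condensed from the trace (loops=5 and nsid 5 are the values used in this run; NVME_HOSTNQN and NVME_HOSTID come from nvmf/common.sh as set earlier):

    for i in $(seq 1 5); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
            -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        waitforserial SPDKISFASTANDAWESOME
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done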
00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.345 [2024-11-05 04:24:18.782166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.345 04:24:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.732 04:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:06.732 04:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:06.732 04:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:06.732 04:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:06.732 04:24:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:08.745 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:08.745 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:08.745 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:08.745 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:08.745 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:08.745 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:08.745 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
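
[Editor's note] waitforserial and waitforserial_disconnect (autotest_common.sh@1200-1233 in the trace) poll lsblk until a block device with the expected SPDK serial appears or disappears on the host. A condensed sketch of the appear-side helper, reconstructed from the traced lines (the 15-retry bound and 2-second sleep come from @1207-1208; error handling is omitted):

    waitforserial() {
        local serial=$1 expected=${2:-1} i=0
        while (( i++ <= 15 )); do                                    # @1208
            sleep 2                                                  # @1207
            local n=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")   # @1209
            (( n == expected )) && return 0                          # @1210
        done
        return 1
    }
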
00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.007 [2024-11-05 04:24:22.501869] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.007 04:24:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.922 04:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.922 04:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:13:10.922 04:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.922 04:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:10.922 04:24:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:12.837 
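
[Editor's note] The host side of each cycle above uses nvme-cli (rpc.sh@86 and @90). The flags, exactly as traced: --hostnqn/--hostid identify the initiator, -t selects the TCP transport, -n names the target subsystem, and -a/-s give the traddr and trsvcid that the listener was created with:

    nvme connect \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be \
        --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
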
04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 [2024-11-05 04:24:26.263629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 [2024-11-05 04:24:26.331805] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.837 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.837 
04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.838 [2024-11-05 04:24:26.400007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.838 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.838 [2024-11-05 04:24:26.472244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.099 [2024-11-05 04:24:26.540489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.099 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:13.099 "tick_rate": 2400000000, 00:13:13.099 "poll_groups": [ 00:13:13.099 { 00:13:13.099 "name": "nvmf_tgt_poll_group_000", 00:13:13.099 "admin_qpairs": 0, 00:13:13.099 "io_qpairs": 224, 00:13:13.099 "current_admin_qpairs": 0, 00:13:13.099 "current_io_qpairs": 0, 00:13:13.099 "pending_bdev_io": 0, 00:13:13.099 "completed_nvme_io": 518, 00:13:13.099 "transports": [ 00:13:13.099 { 00:13:13.099 "trtype": "TCP" 00:13:13.099 } 00:13:13.099 ] 00:13:13.099 }, 00:13:13.099 { 00:13:13.099 "name": "nvmf_tgt_poll_group_001", 00:13:13.099 "admin_qpairs": 1, 00:13:13.099 "io_qpairs": 223, 00:13:13.099 "current_admin_qpairs": 0, 00:13:13.099 "current_io_qpairs": 0, 00:13:13.099 "pending_bdev_io": 0, 00:13:13.099 "completed_nvme_io": 224, 00:13:13.099 "transports": [ 00:13:13.099 { 00:13:13.099 "trtype": "TCP" 00:13:13.099 } 00:13:13.099 ] 00:13:13.099 }, 00:13:13.099 { 00:13:13.099 "name": "nvmf_tgt_poll_group_002", 00:13:13.099 "admin_qpairs": 6, 00:13:13.099 "io_qpairs": 218, 00:13:13.099 "current_admin_qpairs": 0, 00:13:13.099 "current_io_qpairs": 0, 00:13:13.099 "pending_bdev_io": 0, 00:13:13.099 "completed_nvme_io": 222, 00:13:13.099 "transports": [ 00:13:13.099 { 00:13:13.099 "trtype": "TCP" 00:13:13.099 } 00:13:13.099 ] 00:13:13.099 }, 00:13:13.099 { 00:13:13.099 "name": "nvmf_tgt_poll_group_003", 00:13:13.099 "admin_qpairs": 0, 00:13:13.099 "io_qpairs": 224, 00:13:13.099 "current_admin_qpairs": 0, 00:13:13.099 "current_io_qpairs": 0, 00:13:13.100 "pending_bdev_io": 0, 00:13:13.100 "completed_nvme_io": 275, 00:13:13.100 "transports": [ 00:13:13.100 { 00:13:13.100 "trtype": "TCP" 00:13:13.100 } 00:13:13.100 ] 00:13:13.100 } 00:13:13.100 ] 00:13:13.100 }' 00:13:13.100 04:24:26 
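
[Editor's note] The jsum checks that follow (rpc.sh@112-113) reduce the nvmf_get_stats JSON above to one number per field: jq extracts the value from each poll group and awk sums them. Equivalent one-liners, assuming the JSON is held in $stats as the trace does:

    echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'  # 0+1+6+0 = 7
    echo "$stats" | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'  # 224+223+218+224 = 889
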
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:13.100 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:13.100 rmmod nvme_tcp 00:13:13.361 rmmod nvme_fabrics 00:13:13.361 rmmod nvme_keyring 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2905803 ']' 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2905803 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 2905803 ']' 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 2905803 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2905803 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
2905803' 00:13:13.361 killing process with pid 2905803 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 2905803 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 2905803 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.361 04:24:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.910 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:15.910 00:13:15.910 real 0m37.741s 00:13:15.910 user 1m53.792s 00:13:15.910 sys 0m7.688s 00:13:15.910 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:15.910 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.910 ************************************ 00:13:15.910 END TEST nvmf_rpc 00:13:15.910 ************************************ 00:13:15.910 04:24:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:15.911 ************************************ 00:13:15.911 START TEST nvmf_invalid 00:13:15.911 ************************************ 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:15.911 * Looking for test storage... 
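
[Editor's note] The lcov version check traced just below (scripts/common.sh@333-373, `lt 1.15 2`) splits both version strings on ".-:" and compares them numerically field by field, treating missing fields as 0. A compact sketch reconstructed from those lines, not the verbatim implementation:

    lt() { cmp_versions "$1" '<' "$2"; }                         # scripts/common.sh@373

    cmp_versions() {
        local ver1 ver2 op=$2 v IFS=.-:
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]                                        # every field equal
    }
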
00:13:15.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:15.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.911 --rc genhtml_branch_coverage=1 00:13:15.911 --rc genhtml_function_coverage=1 00:13:15.911 --rc genhtml_legend=1 00:13:15.911 --rc geninfo_all_blocks=1 00:13:15.911 --rc geninfo_unexecuted_blocks=1 00:13:15.911 00:13:15.911 ' 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:15.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.911 --rc genhtml_branch_coverage=1 00:13:15.911 --rc genhtml_function_coverage=1 00:13:15.911 --rc genhtml_legend=1 00:13:15.911 --rc geninfo_all_blocks=1 00:13:15.911 --rc geninfo_unexecuted_blocks=1 00:13:15.911 00:13:15.911 ' 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:15.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.911 --rc genhtml_branch_coverage=1 00:13:15.911 --rc genhtml_function_coverage=1 00:13:15.911 --rc genhtml_legend=1 00:13:15.911 --rc geninfo_all_blocks=1 00:13:15.911 --rc geninfo_unexecuted_blocks=1 00:13:15.911 00:13:15.911 ' 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:15.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.911 --rc genhtml_branch_coverage=1 00:13:15.911 --rc genhtml_function_coverage=1 00:13:15.911 --rc genhtml_legend=1 00:13:15.911 --rc geninfo_all_blocks=1 00:13:15.911 --rc geninfo_unexecuted_blocks=1 00:13:15.911 00:13:15.911 ' 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:15.911 04:24:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.911 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:15.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:15.912 04:24:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:22.504 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.504 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:22.504 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:22.504 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:22.504 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:22.504 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:22.504 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:22.504 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:22.504 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:22.505 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:22.767 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:22.767 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:22.767 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:22.767 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.767 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:23.028 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:23.028 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:23.028 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:23.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:23.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.653 ms 00:13:23.028 00:13:23.028 --- 10.0.0.2 ping statistics --- 00:13:23.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.028 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:13:23.028 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:23.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
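
The whole nvmf_tcp_init sequence above reduces to a dozen ip(8) commands: move one port into a private network namespace so target and initiator run on independent network stacks of the same host, address the two ends, and open TCP port 4420 for NVMe/TCP. Condensed from the log (run as root; cvl_0_0/cvl_0_1 are the port names discovered above):

#!/usr/bin/env bash
# Namespace split performed by nvmf_tcp_init, condensed from the log above.
set -e
ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side (root ns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP traffic arriving on the initiator port reach the host
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

The two pings at the end are the same reachability check whose output continues below.
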
00:13:23.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:13:23.028 00:13:23.028 --- 10.0.0.1 ping statistics --- 00:13:23.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:23.028 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:13:23.028 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2916198 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2916198 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 2916198 ']' 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:23.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:23.029 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:23.029 [2024-11-05 04:24:36.504632] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
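
With the namespace up, nvmfappstart launches the target application inside it and blocks until the JSON-RPC socket answers; the nvmfpid captured above (2916198) is that process. A simplified sketch of the start-and-wait step (the retry loop approximates the real waitforlisten helper rather than copying it):

#!/usr/bin/env bash
# Launch nvmf_tgt inside the target namespace and wait for its RPC socket.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk     # path from the log
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
for _ in $(seq 1 100); do                                  # ~10s of retries
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done
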
00:13:23.029 [2024-11-05 04:24:36.504683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:23.029 [2024-11-05 04:24:36.575654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:23.029 [2024-11-05 04:24:36.613718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:23.029 [2024-11-05 04:24:36.613757] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:23.029 [2024-11-05 04:24:36.613765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:23.029 [2024-11-05 04:24:36.613772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:23.029 [2024-11-05 04:24:36.613778] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:23.029 [2024-11-05 04:24:36.615515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:23.029 [2024-11-05 04:24:36.615629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:23.029 [2024-11-05 04:24:36.615797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.029 [2024-11-05 04:24:36.615797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:23.289 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:23.289 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:13:23.290 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:23.290 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:23.290 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:23.290 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.290 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:23.290 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode5480 00:13:23.290 [2024-11-05 04:24:36.899945] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:23.551 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:23.551 { 00:13:23.551 "nqn": "nqn.2016-06.io.spdk:cnode5480", 00:13:23.551 "tgt_name": "foobar", 00:13:23.551 "method": "nvmf_create_subsystem", 00:13:23.551 "req_id": 1 00:13:23.551 } 00:13:23.551 Got JSON-RPC error response 00:13:23.551 response: 00:13:23.551 { 00:13:23.551 "code": -32603, 00:13:23.551 "message": "Unable to find target foobar" 00:13:23.551 }' 00:13:23.551 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:23.551 { 00:13:23.551 "nqn": "nqn.2016-06.io.spdk:cnode5480", 00:13:23.551 "tgt_name": "foobar", 00:13:23.551 "method": "nvmf_create_subsystem", 00:13:23.551 "req_id": 1 00:13:23.551 } 00:13:23.551 Got JSON-RPC error response 00:13:23.551 
response: 00:13:23.551 { 00:13:23.551 "code": -32603, 00:13:23.551 "message": "Unable to find target foobar" 00:13:23.551 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:23.551 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:23.551 04:24:36 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5775 00:13:23.551 [2024-11-05 04:24:37.092591] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5775: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:23.551 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:23.551 { 00:13:23.551 "nqn": "nqn.2016-06.io.spdk:cnode5775", 00:13:23.551 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:23.551 "method": "nvmf_create_subsystem", 00:13:23.551 "req_id": 1 00:13:23.551 } 00:13:23.551 Got JSON-RPC error response 00:13:23.551 response: 00:13:23.551 { 00:13:23.551 "code": -32602, 00:13:23.551 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:23.551 }' 00:13:23.551 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:23.551 { 00:13:23.551 "nqn": "nqn.2016-06.io.spdk:cnode5775", 00:13:23.551 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:23.551 "method": "nvmf_create_subsystem", 00:13:23.551 "req_id": 1 00:13:23.551 } 00:13:23.551 Got JSON-RPC error response 00:13:23.551 response: 00:13:23.551 { 00:13:23.551 "code": -32602, 00:13:23.551 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:23.551 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:23.551 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:23.551 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7445 00:13:23.812 [2024-11-05 04:24:37.281189] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7445: invalid model number 'SPDK_Controller' 00:13:23.812 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:23.812 { 00:13:23.812 "nqn": "nqn.2016-06.io.spdk:cnode7445", 00:13:23.812 "model_number": "SPDK_Controller\u001f", 00:13:23.812 "method": "nvmf_create_subsystem", 00:13:23.812 "req_id": 1 00:13:23.812 } 00:13:23.812 Got JSON-RPC error response 00:13:23.812 response: 00:13:23.812 { 00:13:23.812 "code": -32602, 00:13:23.812 "message": "Invalid MN SPDK_Controller\u001f" 00:13:23.812 }' 00:13:23.812 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:23.812 { 00:13:23.813 "nqn": "nqn.2016-06.io.spdk:cnode7445", 00:13:23.813 "model_number": "SPDK_Controller\u001f", 00:13:23.813 "method": "nvmf_create_subsystem", 00:13:23.813 "req_id": 1 00:13:23.813 } 00:13:23.813 Got JSON-RPC error response 00:13:23.813 response: 00:13:23.813 { 00:13:23.813 "code": -32602, 00:13:23.813 "message": "Invalid MN SPDK_Controller\u001f" 00:13:23.813 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:23.813 04:24:37 
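
Every negative test in this run has the same shape: call nvmf_create_subsystem with exactly one malformed field, capture the JSON-RPC error, and pattern-match the message (Unable to find target, Invalid SN, Invalid MN). A condensed sketch of the serial-number case, reusing the same $'...\037' control-byte injection the script performs above:

#!/usr/bin/env bash
# Inject a non-printable byte (\037 = 0x1f) into the serial number and assert
# on the JSON-RPC error text, as target/invalid.sh does above.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
out=$("$SPDK/scripts/rpc.py" nvmf_create_subsystem \
        -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5775 2>&1 || true)
[[ $out == *"Invalid SN"* ]] || { echo "unexpected response: $out" >&2; exit 1; }

The -32602 code in the responses above is JSON-RPC's standard "invalid params" error; only the message text differs between the SN and MN cases.
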
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:23.813 04:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.813 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:24.075 
04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ > == \- ]] 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '>PAp16V8Pp'\''e*fqLkC#@K' 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '>PAp16V8Pp'\''e*fqLkC#@K' nqn.2016-06.io.spdk:cnode31 00:13:24.075 [2024-11-05 04:24:37.634378] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31: invalid serial number '>PAp16V8Pp'e*fqLkC#@K' 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:24.075 { 00:13:24.075 "nqn": "nqn.2016-06.io.spdk:cnode31", 00:13:24.075 "serial_number": ">PAp16V8Pp'\''e*fqLkC#@K", 00:13:24.075 "method": "nvmf_create_subsystem", 00:13:24.075 "req_id": 1 00:13:24.075 } 00:13:24.075 Got JSON-RPC error response 00:13:24.075 response: 00:13:24.075 { 00:13:24.075 "code": -32602, 00:13:24.075 "message": "Invalid SN >PAp16V8Pp'\''e*fqLkC#@K" 00:13:24.075 }' 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:24.075 { 00:13:24.075 "nqn": "nqn.2016-06.io.spdk:cnode31", 00:13:24.075 "serial_number": ">PAp16V8Pp'e*fqLkC#@K", 00:13:24.075 "method": "nvmf_create_subsystem", 00:13:24.075 "req_id": 1 00:13:24.075 } 00:13:24.075 Got JSON-RPC error response 00:13:24.075 response: 00:13:24.075 { 00:13:24.075 "code": -32602, 00:13:24.075 "message": "Invalid SN >PAp16V8Pp'e*fqLkC#@K" 00:13:24.075 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' 
'77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.075 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:24.338 04:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.338 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x40' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 45 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.339 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:24.601 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:24.601 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:24.601 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.601 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:24.601 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:13:24.601 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:24.601 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:13:24.601 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:24.601 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:13:24.601 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ a == \- ]]
00:13:24.601 04:24:37 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'a6a|&+b0$:!T&#B.P@?b.p!m4`;@;b)ibT-<anMh%'
00:13:26.441 04:24:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:28.992 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:13:28.992
00:13:28.992 real 0m12.960s
00:13:28.992 user 0m17.800s
00:13:28.992 sys 0m6.378s
00:13:28.992 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:28.992 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:28.992 ************************************
00:13:28.992 END TEST nvmf_invalid
00:13:28.992 ************************************
00:13:28.992 04:24:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:28.992 04:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:13:28.992 04:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:28.992 04:24:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:28.992 ************************************
00:13:28.992 START TEST nvmf_connect_stress
00:13:28.992 ************************************
00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:13:28.993 * Looking for test storage...
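
For reference, the long printf/echo runs above (the 21- and 41-character strings fed to the cnode31 serial-number and model-number checks) are one helper, gen_random_s, unrolled by xtrace: one loop iteration per character, each code point printed with printf %x, materialized with echo -e, and appended with string+=. A compact restatement (how the script picks each index is not visible in this log, so the RANDOM-based choice is an assumption):

#!/usr/bin/env bash
# Compact form of the gen_random_s loop unrolled in the xtrace above.
# RANDOM-based index selection is an assumption; the log only shows the
# printf %x / echo -e / string+= steps for each chosen code point.
gen_random_s() {
    local length=$1 ll ch string=''
    local chars=($(seq 32 127))          # same code-point table as the log
    for (( ll = 0; ll < length; ll++ )); do
        printf -v ch "\x$(printf '%x' "${chars[RANDOM % ${#chars[@]}]}")"
        string+=$ch
    done
    echo "$string"
}
gen_random_s 41   # e.g. the 41-character invalid model number built above
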
00:13:28.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:28.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.993 --rc genhtml_branch_coverage=1 00:13:28.993 --rc genhtml_function_coverage=1 00:13:28.993 --rc genhtml_legend=1 00:13:28.993 --rc geninfo_all_blocks=1 00:13:28.993 --rc geninfo_unexecuted_blocks=1 00:13:28.993 00:13:28.993 ' 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:28.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.993 --rc genhtml_branch_coverage=1 00:13:28.993 --rc genhtml_function_coverage=1 00:13:28.993 --rc genhtml_legend=1 00:13:28.993 --rc geninfo_all_blocks=1 00:13:28.993 --rc geninfo_unexecuted_blocks=1 00:13:28.993 00:13:28.993 ' 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:28.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.993 --rc genhtml_branch_coverage=1 00:13:28.993 --rc genhtml_function_coverage=1 00:13:28.993 --rc genhtml_legend=1 00:13:28.993 --rc geninfo_all_blocks=1 00:13:28.993 --rc geninfo_unexecuted_blocks=1 00:13:28.993 00:13:28.993 ' 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:28.993 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:28.993 --rc genhtml_branch_coverage=1 00:13:28.993 --rc genhtml_function_coverage=1 00:13:28.993 --rc genhtml_legend=1 00:13:28.993 --rc geninfo_all_blocks=1 00:13:28.993 --rc geninfo_unexecuted_blocks=1 00:13:28.993 00:13:28.993 ' 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
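
The scripts/common.sh trace above is a dotted-version comparison deciding which lcov flags to use: lt 1.15 2 splits both version strings on ./-/: and compares them field by field, and since lcov 1.x sorts before 2 the legacy --rc lcov_branch_coverage/lcov_function_coverage spellings are exported. A standalone sketch with the same field-wise semantics:

#!/usr/bin/env bash
# Field-wise dotted-version comparison, as used above to pick lcov options.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
    done
    return 1                                              # equal: not less-than
}
version_lt 1.15 2 && echo "lcov < 2: use legacy --rc lcov_* option names"
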
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.993 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:13:28.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:28.994 04:24:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:37.144 04:24:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:37.144 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:37.144 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.144 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:37.145 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:37.145 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:37.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:37.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:13:37.145 00:13:37.145 --- 10.0.0.2 ping statistics --- 00:13:37.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.145 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:37.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:37.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:13:37.145 00:13:37.145 --- 10.0.0.1 ping statistics --- 00:13:37.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:37.145 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2921080 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2921080 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 2921080 ']' 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:37.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:37.145 04:24:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.145 [2024-11-05 04:24:49.734124] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:13:37.145 [2024-11-05 04:24:49.734208] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:37.145 [2024-11-05 04:24:49.833479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:37.145 [2024-11-05 04:24:49.885417] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:37.145 [2024-11-05 04:24:49.885471] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:37.145 [2024-11-05 04:24:49.885481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:37.145 [2024-11-05 04:24:49.885491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:37.145 [2024-11-05 04:24:49.885498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:37.145 [2024-11-05 04:24:49.887542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:37.145 [2024-11-05 04:24:49.887714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:37.145 [2024-11-05 04:24:49.887715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.145 [2024-11-05 04:24:50.582948] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
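The bring-up traced here is: carve out a network namespace for the target side, give each side a 10.0.0.x address, open TCP port 4420, start nvmf_tgt inside the namespace, then provision it over JSON-RPC. A minimal sketch of the same sequence, assuming a veth pair stands in for the physical cvl_0_0/cvl_0_1 e810 ports, an SPDK checkout under $SPDK, and a plain poll loop in place of autotest's waitforlisten helper (all three are assumptions, not part of this run):

  #!/usr/bin/env bash
  # Sketch only -- run as root; veth pair instead of the e810 ports used in this job.
  set -e
  SPDK=${SPDK:-$HOME/spdk}          # assumed checkout location
  NS=cvl_0_0_ns_spdk

  ip netns add "$NS"
  ip link add veth_ini type veth peer name veth_tgt
  ip link set veth_tgt netns "$NS"                  # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev veth_ini
  ip link set veth_ini up
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt
  ip netns exec "$NS" ip link set veth_tgt up
  ip netns exec "$NS" ip link set lo up
  # Tag the firewall rule so teardown can filter it back out later.
  iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment SPDK_NVMF
  ping -c 1 10.0.0.2                                # reachability in both directions
  ip netns exec "$NS" ping -c 1 10.0.0.1

  # Start the target inside the namespace; poll the default RPC socket
  # (/var/tmp/spdk.sock) until it answers, instead of waitforlisten.
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -m 0xE &
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done

  # Same provisioning RPCs as the rpc_cmd calls in this trace.
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$SPDK/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10
  "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420
  "$SPDK/scripts/rpc.py" bdev_null_create NULL1 1000 512

The unix-domain RPC socket lives on the shared filesystem, which is why rpc.py can drive a target that is running inside the namespace without entering it.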
00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.145 [2024-11-05 04:24:50.607377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.145 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.145 NULL1 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2921427 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:37.146 04:24:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.146 04:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.718 04:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.718 04:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:37.718 04:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.718 04:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.718 04:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.979 04:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.979 04:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:37.979 04:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.979 04:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.979 04:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.240 04:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.240 04:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:38.240 04:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.240 04:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.240 04:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.501 04:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.501 04:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:38.501 04:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.501 04:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.501 04:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.762 04:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.762 04:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:38.762 04:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.762 04:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.762 04:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.335 04:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.335 04:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:39.335 04:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.335 04:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.335 04:24:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.596 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.596 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:39.596 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.596 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.596 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.857 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.857 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:39.857 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.857 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.857 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.118 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.118 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:40.118 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.118 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.118 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.379 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.379 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:40.379 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.379 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.379 04:24:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.952 04:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.952 04:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:40.952 04:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.952 04:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.952 04:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.213 04:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.213 04:24:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:41.213 04:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.213 04:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.213 04:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.476 04:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.476 04:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:41.476 04:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.476 04:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.476 04:24:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.738 04:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.738 04:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:41.738 04:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.738 04:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.738 04:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.999 04:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.999 04:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:41.999 04:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.999 04:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.999 04:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.570 04:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.570 04:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:42.570 04:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.570 04:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.570 04:24:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.831 04:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.831 04:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:42.831 04:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.832 04:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.832 04:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.093 04:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.093 04:24:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:43.093 04:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.093 04:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.093 04:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.354 04:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.354 04:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:43.354 04:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.354 04:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.354 04:24:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:43.615 04:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.615 04:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:43.615 04:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.615 04:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.615 04:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.187 04:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.187 04:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:44.187 04:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.187 04:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.187 04:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.448 04:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.448 04:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:44.448 04:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.448 04:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.448 04:24:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.710 04:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.710 04:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:44.710 04:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.710 04:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.710 04:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:44.971 04:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.971 04:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:44.971 04:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.971 04:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.971 04:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.233 04:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.233 04:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:45.233 04:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.233 04:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.233 04:24:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.806 04:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.806 04:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:45.806 04:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.806 04:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.806 04:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.067 04:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.067 04:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:46.067 04:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.067 04:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.067 04:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.328 04:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.328 04:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:46.328 04:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.328 04:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.328 04:24:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.590 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.590 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:46.590 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.590 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.590 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.163 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.163 04:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:47.163 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.163 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.163 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.163 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:47.424 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.424 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2921427 00:13:47.424 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2921427) - No such process 00:13:47.424 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2921427 00:13:47.424 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:47.424 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:47.424 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:47.425 rmmod nvme_tcp 00:13:47.425 rmmod nvme_fabrics 00:13:47.425 rmmod nvme_keyring 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2921080 ']' 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2921080 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 2921080 ']' 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 2921080 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2921080 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 
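The shutdown path is deliberate about not killing the wrong thing: before signalling, it probes the pid with kill -0 and reads the live process's comm field. A condensed sketch of the killprocess pattern exactly as it appears in this trace (the sudo special case and any non-Linux branch are elided):

  # Refuse to touch a pid unless it is alive and is not a sudo wrapper.
  killprocess() {
      local pid=$1 process_name
      [ -n "$pid" ] || return 1
      kill -0 "$pid" 2>/dev/null || return 0        # already gone -> nothing to do
      if [ "$(uname)" = Linux ]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      if [ "$process_name" = sudo ]; then
          return 1      # the real helper unwraps sudo to kill the child; elided here
      fi
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null || true               # reap it when it is our own child
  }

In the trace the comm check resolves to reactor_1, the SPDK reactor thread name, so the target process is signalled directly.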
00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2921080' 00:13:47.425 killing process with pid 2921080 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 2921080 00:13:47.425 04:25:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 2921080 00:13:47.774 04:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:47.774 04:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:47.774 04:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:47.774 04:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:47.774 04:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:47.774 04:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:47.774 04:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:47.774 04:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:47.774 04:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:47.774 04:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:47.775 04:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:47.775 04:25:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.725 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:49.725 00:13:49.725 real 0m20.965s 00:13:49.725 user 0m42.282s 00:13:49.725 sys 0m8.952s 00:13:49.725 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:49.725 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.725 ************************************ 00:13:49.725 END TEST nvmf_connect_stress 00:13:49.725 ************************************ 00:13:49.725 04:25:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:49.725 04:25:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:49.725 04:25:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:49.725 04:25:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:49.725 ************************************ 00:13:49.725 START TEST nvmf_fused_ordering 00:13:49.725 ************************************ 00:13:49.725 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:49.725 * Looking for test storage... 
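Final firewall and namespace teardown leans on the SPDK_NVMF comment attached when the rules went in: iptr saves the whole ruleset, drops every tagged line, and restores the rest, so no rule positions need to be remembered. A short sketch, assuming _remove_spdk_ns amounts to deleting the namespace (its body is hidden behind xtrace_disable_per_cmd in this log):

  # Remove only the rules this test added, identified by their comment tag.
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Tear down the namespace; a physical port parked inside it falls back to
  # the default namespace with its addresses cleared.
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1          # leave the initiator-side port unconfigured too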
00:13:49.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.725 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:49.725 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:13:49.725 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:49.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.987 --rc genhtml_branch_coverage=1 00:13:49.987 --rc genhtml_function_coverage=1 00:13:49.987 --rc genhtml_legend=1 00:13:49.987 --rc geninfo_all_blocks=1 00:13:49.987 --rc geninfo_unexecuted_blocks=1 00:13:49.987 00:13:49.987 ' 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:49.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.987 --rc genhtml_branch_coverage=1 00:13:49.987 --rc genhtml_function_coverage=1 00:13:49.987 --rc genhtml_legend=1 00:13:49.987 --rc geninfo_all_blocks=1 00:13:49.987 --rc geninfo_unexecuted_blocks=1 00:13:49.987 00:13:49.987 ' 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:49.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.987 --rc genhtml_branch_coverage=1 00:13:49.987 --rc genhtml_function_coverage=1 00:13:49.987 --rc genhtml_legend=1 00:13:49.987 --rc geninfo_all_blocks=1 00:13:49.987 --rc geninfo_unexecuted_blocks=1 00:13:49.987 00:13:49.987 ' 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:49.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.987 --rc genhtml_branch_coverage=1 00:13:49.987 --rc genhtml_function_coverage=1 00:13:49.987 --rc genhtml_legend=1 00:13:49.987 --rc geninfo_all_blocks=1 00:13:49.987 --rc geninfo_unexecuted_blocks=1 00:13:49.987 00:13:49.987 '
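
The lt/cmp_versions trace above, used here to decide whether the installed lcov predates version 2, is an element-wise numeric compare over version strings split on dots and dashes. A compact sketch of the traced logic, abridged to the '<' case exercised in this log (the real scripts/common.sh helper also validates each element through its decimal function and handles other operators):

# returns 0 (true) when version $1 sorts before version $2
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local -a ver1 ver2
    local v max
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$3"
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # left is newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # left is older
    done
    return 1                                              # equal, so "<" does not hold
}

lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x: enable the --rc coverage options"
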
00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.987 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
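
The host-identity setup just traced generates a fresh host NQN per run and reuses its UUID suffix as the host ID. A short sketch of the same steps (the ${...##*:} derivation is inferred from the paired values in this trace rather than copied from nvmf/common.sh; the commented connect line only illustrates the shape of command these variables feed):

NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:00d0226a-...
NVME_HOSTID=${NVME_HOSTNQN##*:}      # the uuid suffix doubles as the host ID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# nvme connect "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:testnqn
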
00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain bin directories repeated five more times]:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[previous PATH value, duplicated entries trimmed] 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[previous PATH value, duplicated entries trimmed] 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[remaining PATH entries as above] 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' '' -eq 1 ']' 00:13:49.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:49.988 04:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.137 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:58.138 04:25:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:58.138 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:58.138 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:58.138 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:58.138 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:58.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:58.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.587 ms 00:13:58.138 00:13:58.138 --- 10.0.0.2 ping statistics --- 00:13:58.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.138 rtt min/avg/max/mdev = 0.587/0.587/0.587/0.000 ms 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:58.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:58.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:13:58.138 00:13:58.138 --- 10.0.0.1 ping statistics --- 00:13:58.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:58.138 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:58.138 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:58.139 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:58.139 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:58.139 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:58.139 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:58.139 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.139 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2927521 00:13:58.139 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2927521 00:13:58.139 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:58.139 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 2927521 ']' 00:13:58.139 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:58.139 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:58.139 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:13:58.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:58.139 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:58.139 04:25:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.139 [2024-11-05 04:25:10.967569] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:13:58.139 [2024-11-05 04:25:10.967636] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:58.139 [2024-11-05 04:25:11.066522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.139 [2024-11-05 04:25:11.117817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:58.139 [2024-11-05 04:25:11.117873] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:58.139 [2024-11-05 04:25:11.117881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:58.139 [2024-11-05 04:25:11.117890] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:58.139 [2024-11-05 04:25:11.117896] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:58.139 [2024-11-05 04:25:11.118698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.401 [2024-11-05 04:25:11.833147] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.401 [2024-11-05 04:25:11.849417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.401 NULL1 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.401 04:25:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:58.401 [2024-11-05 04:25:11.906285] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
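
The whole bring-up traced since nvmftestinit can be replayed by hand with the same commands: move the target-side port into a private network namespace, start nvmf_tgt inside it, configure it over the RPC socket, then aim the fused-ordering exerciser at the listener. A condensed sketch under a few stated assumptions: rpc_cmd above is the harness's RPC wrapper and invoking scripts/rpc.py directly is assumed to be the manual equivalent (the RPC names and arguments are verbatim from the trace), $SPDK_ROOT is an assumed variable standing for the checkout path, and the harness's wait-for-socket polling is reduced to a sleep:

SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

# target NIC goes into its own namespace; the initiator side stays in the root namespace
ip netns add $NS
ip link set cvl_0_0 netns $NS
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                     # root namespace -> target namespace

# start the target in the namespace, then configure it over /var/tmp/spdk.sock
ip netns exec $NS "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
sleep 2                                # the harness polls the RPC socket instead

"$SPDK_ROOT/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
"$SPDK_ROOT/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$SPDK_ROOT/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$SPDK_ROOT/scripts/rpc.py" bdev_null_create NULL1 1000 512    # 1000 MiB of 512 B blocks -> "size: 1GB"
"$SPDK_ROOT/scripts/rpc.py" bdev_wait_for_examine
"$SPDK_ROOT/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# drive the exerciser at the exported namespace; each completed iteration
# prints one of the fused_ordering(N) lines that follow
"$SPDK_ROOT/test/nvme/fused_ordering/fused_ordering" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
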
00:13:58.401 [2024-11-05 04:25:11.906329] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927816 ] 00:13:58.974 Attached to nqn.2016-06.io.spdk:cnode1 00:13:58.974 Namespace ID: 1 size: 1GB 00:13:58.974 fused_ordering(0) [fused_ordering(1) through fused_ordering(849) trimmed: one identical counter line per completed iteration, timestamps 00:13:58.974 through 00:14:00.650] 00:14:00.650 fused_ordering(850) 00:14:00.650
fused_ordering(851) 00:14:00.650 fused_ordering(852) 00:14:00.650 fused_ordering(853) 00:14:00.650 fused_ordering(854) 00:14:00.650 fused_ordering(855) 00:14:00.650 fused_ordering(856) 00:14:00.650 fused_ordering(857) 00:14:00.650 fused_ordering(858) 00:14:00.650 fused_ordering(859) 00:14:00.650 fused_ordering(860) 00:14:00.650 fused_ordering(861) 00:14:00.650 fused_ordering(862) 00:14:00.650 fused_ordering(863) 00:14:00.650 fused_ordering(864) 00:14:00.650 fused_ordering(865) 00:14:00.650 fused_ordering(866) 00:14:00.650 fused_ordering(867) 00:14:00.650 fused_ordering(868) 00:14:00.650 fused_ordering(869) 00:14:00.650 fused_ordering(870) 00:14:00.650 fused_ordering(871) 00:14:00.650 fused_ordering(872) 00:14:00.650 fused_ordering(873) 00:14:00.650 fused_ordering(874) 00:14:00.650 fused_ordering(875) 00:14:00.650 fused_ordering(876) 00:14:00.650 fused_ordering(877) 00:14:00.650 fused_ordering(878) 00:14:00.650 fused_ordering(879) 00:14:00.650 fused_ordering(880) 00:14:00.650 fused_ordering(881) 00:14:00.650 fused_ordering(882) 00:14:00.650 fused_ordering(883) 00:14:00.650 fused_ordering(884) 00:14:00.650 fused_ordering(885) 00:14:00.650 fused_ordering(886) 00:14:00.650 fused_ordering(887) 00:14:00.650 fused_ordering(888) 00:14:00.650 fused_ordering(889) 00:14:00.650 fused_ordering(890) 00:14:00.650 fused_ordering(891) 00:14:00.650 fused_ordering(892) 00:14:00.650 fused_ordering(893) 00:14:00.650 fused_ordering(894) 00:14:00.650 fused_ordering(895) 00:14:00.650 fused_ordering(896) 00:14:00.650 fused_ordering(897) 00:14:00.650 fused_ordering(898) 00:14:00.650 fused_ordering(899) 00:14:00.650 fused_ordering(900) 00:14:00.650 fused_ordering(901) 00:14:00.650 fused_ordering(902) 00:14:00.650 fused_ordering(903) 00:14:00.650 fused_ordering(904) 00:14:00.650 fused_ordering(905) 00:14:00.650 fused_ordering(906) 00:14:00.650 fused_ordering(907) 00:14:00.650 fused_ordering(908) 00:14:00.650 fused_ordering(909) 00:14:00.650 fused_ordering(910) 00:14:00.650 fused_ordering(911) 00:14:00.650 fused_ordering(912) 00:14:00.650 fused_ordering(913) 00:14:00.650 fused_ordering(914) 00:14:00.650 fused_ordering(915) 00:14:00.650 fused_ordering(916) 00:14:00.650 fused_ordering(917) 00:14:00.650 fused_ordering(918) 00:14:00.650 fused_ordering(919) 00:14:00.650 fused_ordering(920) 00:14:00.650 fused_ordering(921) 00:14:00.650 fused_ordering(922) 00:14:00.650 fused_ordering(923) 00:14:00.650 fused_ordering(924) 00:14:00.650 fused_ordering(925) 00:14:00.650 fused_ordering(926) 00:14:00.650 fused_ordering(927) 00:14:00.650 fused_ordering(928) 00:14:00.650 fused_ordering(929) 00:14:00.650 fused_ordering(930) 00:14:00.650 fused_ordering(931) 00:14:00.650 fused_ordering(932) 00:14:00.650 fused_ordering(933) 00:14:00.650 fused_ordering(934) 00:14:00.650 fused_ordering(935) 00:14:00.650 fused_ordering(936) 00:14:00.650 fused_ordering(937) 00:14:00.650 fused_ordering(938) 00:14:00.650 fused_ordering(939) 00:14:00.650 fused_ordering(940) 00:14:00.650 fused_ordering(941) 00:14:00.650 fused_ordering(942) 00:14:00.650 fused_ordering(943) 00:14:00.650 fused_ordering(944) 00:14:00.650 fused_ordering(945) 00:14:00.650 fused_ordering(946) 00:14:00.650 fused_ordering(947) 00:14:00.650 fused_ordering(948) 00:14:00.650 fused_ordering(949) 00:14:00.650 fused_ordering(950) 00:14:00.650 fused_ordering(951) 00:14:00.650 fused_ordering(952) 00:14:00.650 fused_ordering(953) 00:14:00.650 fused_ordering(954) 00:14:00.650 fused_ordering(955) 00:14:00.650 fused_ordering(956) 00:14:00.650 fused_ordering(957) 00:14:00.650 fused_ordering(958) 
00:14:00.650 fused_ordering(959) 00:14:00.650 fused_ordering(960) 00:14:00.650 fused_ordering(961) 00:14:00.650 fused_ordering(962) 00:14:00.650 fused_ordering(963) 00:14:00.650 fused_ordering(964) 00:14:00.650 fused_ordering(965) 00:14:00.650 fused_ordering(966) 00:14:00.650 fused_ordering(967) 00:14:00.650 fused_ordering(968) 00:14:00.650 fused_ordering(969) 00:14:00.650 fused_ordering(970) 00:14:00.650 fused_ordering(971) 00:14:00.650 fused_ordering(972) 00:14:00.650 fused_ordering(973) 00:14:00.650 fused_ordering(974) 00:14:00.650 fused_ordering(975) 00:14:00.650 fused_ordering(976) 00:14:00.650 fused_ordering(977) 00:14:00.650 fused_ordering(978) 00:14:00.650 fused_ordering(979) 00:14:00.650 fused_ordering(980) 00:14:00.650 fused_ordering(981) 00:14:00.650 fused_ordering(982) 00:14:00.650 fused_ordering(983) 00:14:00.650 fused_ordering(984) 00:14:00.650 fused_ordering(985) 00:14:00.650 fused_ordering(986) 00:14:00.650 fused_ordering(987) 00:14:00.650 fused_ordering(988) 00:14:00.650 fused_ordering(989) 00:14:00.650 fused_ordering(990) 00:14:00.650 fused_ordering(991) 00:14:00.650 fused_ordering(992) 00:14:00.650 fused_ordering(993) 00:14:00.650 fused_ordering(994) 00:14:00.650 fused_ordering(995) 00:14:00.650 fused_ordering(996) 00:14:00.650 fused_ordering(997) 00:14:00.650 fused_ordering(998) 00:14:00.650 fused_ordering(999) 00:14:00.650 fused_ordering(1000) 00:14:00.650 fused_ordering(1001) 00:14:00.650 fused_ordering(1002) 00:14:00.650 fused_ordering(1003) 00:14:00.650 fused_ordering(1004) 00:14:00.650 fused_ordering(1005) 00:14:00.650 fused_ordering(1006) 00:14:00.650 fused_ordering(1007) 00:14:00.650 fused_ordering(1008) 00:14:00.650 fused_ordering(1009) 00:14:00.650 fused_ordering(1010) 00:14:00.650 fused_ordering(1011) 00:14:00.650 fused_ordering(1012) 00:14:00.650 fused_ordering(1013) 00:14:00.650 fused_ordering(1014) 00:14:00.650 fused_ordering(1015) 00:14:00.650 fused_ordering(1016) 00:14:00.650 fused_ordering(1017) 00:14:00.650 fused_ordering(1018) 00:14:00.650 fused_ordering(1019) 00:14:00.650 fused_ordering(1020) 00:14:00.650 fused_ordering(1021) 00:14:00.650 fused_ordering(1022) 00:14:00.650 fused_ordering(1023) 00:14:00.650 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:00.650 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:00.650 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:00.650 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:14:00.650 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:00.650 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:14:00.650 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:00.650 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:00.650 rmmod nvme_tcp 00:14:00.650 rmmod nvme_fabrics 00:14:00.650 rmmod nvme_keyring 00:14:00.650 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:00.650 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:14:00.650 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:14:00.650 04:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2927521 ']' 00:14:00.650 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2927521 00:14:00.650 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 2927521 ']' 00:14:00.650 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 2927521 00:14:00.650 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:14:00.651 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:00.651 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2927521 00:14:00.651 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:00.651 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:00.651 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2927521' 00:14:00.651 killing process with pid 2927521 00:14:00.651 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 2927521 00:14:00.651 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 2927521 00:14:00.912 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:00.912 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:00.912 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:00.912 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:14:00.912 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:14:00.912 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:00.912 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:14:00.912 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:00.912 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:00.912 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.912 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:00.912 04:25:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:03.460 00:14:03.460 real 0m13.279s 00:14:03.460 user 0m6.996s 00:14:03.460 sys 0m6.995s 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:03.460 ************************************ 00:14:03.460 END TEST nvmf_fused_ordering 00:14:03.460 
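The teardown traced above is the shared nvmftestfini path: nvmfcleanup retries unloading the kernel initiator modules, then killprocess stops the nvmf_tgt reactor (pid 2927521 on this run) before the namespace is removed. A minimal bash sketch of those two helpers, reconstructed from the xtrace alone -- the function bodies below are assumptions simplified from what nvmf/common.sh and autotest_common.sh actually do, not verbatim SPDK code:

    # Sketch only: shapes inferred from the trace above.
    nvmfcleanup() {
        sync
        set +e
        for i in {1..20}; do
            # nvme-tcp can stay pinned while a host is still disconnecting,
            # hence the retry loop; -r also drops now-unused dependencies
            # (the rmmod nvme_fabrics / rmmod nvme_keyring lines above).
            modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
            sleep 1
        done
        set -e
    }

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0     # already gone
        # refuse to kill a sudo wrapper; the target shows up as reactor_N
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                    # reap it if it is our child
        return 0
    }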
************************************ 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:03.460 ************************************ 00:14:03.460 START TEST nvmf_ns_masking 00:14:03.460 ************************************ 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:03.460 * Looking for test storage... 00:14:03.460 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:14:03.460 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:03.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.461 --rc genhtml_branch_coverage=1 00:14:03.461 --rc genhtml_function_coverage=1 00:14:03.461 --rc genhtml_legend=1 00:14:03.461 --rc geninfo_all_blocks=1 00:14:03.461 --rc geninfo_unexecuted_blocks=1 00:14:03.461 00:14:03.461 ' 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:03.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.461 --rc genhtml_branch_coverage=1 00:14:03.461 --rc genhtml_function_coverage=1 00:14:03.461 --rc genhtml_legend=1 00:14:03.461 --rc geninfo_all_blocks=1 00:14:03.461 --rc geninfo_unexecuted_blocks=1 00:14:03.461 00:14:03.461 ' 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:03.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.461 --rc genhtml_branch_coverage=1 00:14:03.461 --rc genhtml_function_coverage=1 00:14:03.461 --rc genhtml_legend=1 00:14:03.461 --rc geninfo_all_blocks=1 00:14:03.461 --rc geninfo_unexecuted_blocks=1 00:14:03.461 00:14:03.461 ' 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:03.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:03.461 --rc genhtml_branch_coverage=1 00:14:03.461 --rc genhtml_function_coverage=1 00:14:03.461 --rc genhtml_legend=1 00:14:03.461 --rc geninfo_all_blocks=1 00:14:03.461 --rc geninfo_unexecuted_blocks=1 00:14:03.461 00:14:03.461 ' 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@7 -- # uname -s 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:03.461 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=a007688e-7e1d-4576-8b62-7eede463352a 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=144fe5b5-aedb-4d86-b783-de15bc3ab650 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=46c95db3-e227-47fb-aa93-13371d796f4f 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:14:03.461 04:25:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:11.606 04:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:11.606 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:11.606 04:25:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:11.606 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:11.606 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
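Before any ns_masking work can start, gather_supported_nvmf_pci_devs has to translate the allow-listed PCI IDs (the two E810 0x159b functions found above) into kernel interface names, which it does by globbing sysfs. A short sketch of that resolution step under the same assumptions -- the pci_devs list is hand-rolled from this rig's two ports rather than built from the script's PCI-ID cache, and the operstate read stands in for its [[ up == up ]] check:

    # Sketch: map each supported PCI function to its net device via sysfs.
    pci_devs=(0000:4b:00.0 0000:4b:00.1)
    for pci in "${pci_devs[@]}"; do
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$path" ] || continue
            dev=${path##*/}                            # e.g. cvl_0_0
            [ "$(cat "$path/operstate")" = up ] || continue
            echo "Found net devices under $pci: $dev"
        done
    done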
00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:11.606 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:11.606 04:25:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:11.606 04:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:11.606 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:11.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:11.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.624 ms 00:14:11.607 00:14:11.607 --- 10.0.0.2 ping statistics --- 00:14:11.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.607 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:11.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:11.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:14:11.607 00:14:11.607 --- 10.0.0.1 ping statistics --- 00:14:11.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:11.607 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2932483 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2932483 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2932483 ']' 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.607 [2024-11-05 04:25:24.186688] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:14:11.607 [2024-11-05 04:25:24.186762] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:11.607 [2024-11-05 04:25:24.268686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.607 [2024-11-05 04:25:24.309559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:11.607 [2024-11-05 04:25:24.309601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:11.607 [2024-11-05 04:25:24.309609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:11.607 [2024-11-05 04:25:24.309616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:11.607 [2024-11-05 04:25:24.309622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
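nvmfappstart above launches nvmf_tgt inside the freshly built cvl_0_0_ns_spdk namespace and then sits in waitforlisten until the RPC socket answers. A hedged sketch of that startup gate -- the rpc_get_methods probe and the retry cadence are assumptions; only the paths and flags are taken from the log:

    # Sketch: start the target in its netns, then poll the RPC socket.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1       # target died during startup
        "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done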
00:14:11.607 [2024-11-05 04:25:24.310181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:11.607 04:25:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.607 04:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:11.607 04:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:11.607 [2024-11-05 04:25:25.171671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:11.607 04:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:11.607 04:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:11.607 04:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:11.868 Malloc1 00:14:11.869 04:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:12.129 Malloc2 00:14:12.129 04:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:12.129 04:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:12.390 04:25:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:12.651 [2024-11-05 04:25:26.032516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:12.651 04:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:12.651 04:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 46c95db3-e227-47fb-aa93-13371d796f4f -a 10.0.0.2 -s 4420 -i 4 00:14:12.651 04:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:12.651 04:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:12.651 04:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:12.651 04:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:12.651 
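Everything the first visibility check needs is created in the handful of RPCs above: one TCP transport, two 64 MiB malloc bdevs, subsystem cnode1 with namespace 1 attached auto-visible (-a on the subsystem), a listener on 10.0.0.2:4420, and finally the host-side connect. Collected into one runnable sequence -- the commands are taken from the trace; $rpc_py is the same wrapper as in the sketch above:

    # Target-side setup as traced, then the nqn.2016-06.io.spdk:host1 connect.
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc1       # 64 MiB, 512-byte blocks
    $rpc_py bdev_malloc_create 64 512 -b Malloc2
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 46c95db3-e227-47fb-aa93-13371d796f4f -a 10.0.0.2 -s 4420 -i 4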
04:25:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.197 [ 0]:0x1 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c45a6e212121466383455bb8260e851b 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c45a6e212121466383455bb8260e851b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.197 [ 0]:0x1 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c45a6e212121466383455bb8260e851b 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c45a6e212121466383455bb8260e851b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.197 04:25:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:15.197 [ 1]:0x2 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73bd5622e19c49df99f6b4539d7f2da8 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73bd5622e19c49df99f6b4539d7f2da8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:15.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.197 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:15.458 04:25:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:15.718 04:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:15.718 04:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 46c95db3-e227-47fb-aa93-13371d796f4f -a 10.0.0.2 -s 4420 -i 4 00:14:15.978 04:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:15.978 04:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:15.978 04:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:15.979 04:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:14:15.979 04:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:14:15.979 04:25:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:17.891 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.151 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:18.151 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.151 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:18.151 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:18.151 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:18.151 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:18.151 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:18.152 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.152 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:18.152 [ 0]:0x2 00:14:18.152 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:18.152 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.152 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=73bd5622e19c49df99f6b4539d7f2da8 00:14:18.152 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73bd5622e19c49df99f6b4539d7f2da8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.152 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:18.412 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:18.412 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.412 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:18.412 [ 0]:0x1 00:14:18.412 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:18.412 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.412 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c45a6e212121466383455bb8260e851b 00:14:18.412 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c45a6e212121466383455bb8260e851b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.412 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:18.412 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.412 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:18.412 [ 1]:0x2 00:14:18.412 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:18.412 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.412 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73bd5622e19c49df99f6b4539d7f2da8 00:14:18.412 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73bd5622e19c49df99f6b4539d7f2da8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.412 04:25:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:18.673 04:25:32 
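The RPC calls traced just above are the core of the masking test: namespace 1 was re-added with --no-auto-visible, exposed to one host with nvmf_ns_add_host, then hidden again with nvmf_ns_remove_host. Reduced to a sketch (the full /var/jenkins/.../rpc.py path is abbreviated to rpc.py here; NQNs and NSIDs are the test's own):

    # attach a namespace that no host can see by default
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # expose namespace 1 to exactly one host NQN
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # hide it from that host again
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

After each step, ns_is_visible re-runs nvme list-ns and nvme id-ns against the live controller: a visible namespace reports its real NGUID (c45a6e21... for Malloc1), while a masked one reports an all-zero NGUID, which is what the [[ $nguid != zeros ]] comparisons in the trace assert.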
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:18.673 [ 0]:0x2 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73bd5622e19c49df99f6b4539d7f2da8 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73bd5622e19c49df99f6b4539d7f2da8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:18.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.673 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:18.933 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:18.934 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 46c95db3-e227-47fb-aa93-13371d796f4f -a 10.0.0.2 -s 4420 -i 4 00:14:19.194 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:19.194 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:14:19.194 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:19.194 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:19.194 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:19.194 04:25:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:14:21.108 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:21.108 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:21.108 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:21.108 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:21.108 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:21.108 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:14:21.108 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:21.108 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:21.368 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:21.368 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:21.368 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:21.368 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.368 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:21.368 [ 0]:0x1 00:14:21.368 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:21.368 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.368 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c45a6e212121466383455bb8260e851b 00:14:21.368 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c45a6e212121466383455bb8260e851b != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.368 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:21.368 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.369 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:21.369 [ 1]:0x2 00:14:21.369 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:21.369 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.369 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73bd5622e19c49df99f6b4539d7f2da8 00:14:21.369 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73bd5622e19c49df99f6b4539d7f2da8 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.369 04:25:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:21.629 [ 0]:0x2 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73bd5622e19c49df99f6b4539d7f2da8 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73bd5622e19c49df99f6b4539d7f2da8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.629 04:25:35 
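Every negative check above runs through the NOT helper from autotest_common.sh, whose trace (local es=0 ... (( !es == 0 ))) keeps repeating in this log. Condensed to the essential pattern it implements (a sketch, not the full helper, which also validates the argument via valid_exec_arg before running it):

    NOT() {
        local es=0
        "$@" || es=$?               # run the wrapped command, capture its exit status
        (( es > 128 )) && return 1  # status > 128 means signal-killed: a crash, not a clean failure
        (( es != 0 ))               # succeed only if the command genuinely failed
    }

That is why NOT ns_is_visible 0x1 passes exactly when namespace 1 is masked from this host.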
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.629 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.630 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.630 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.630 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.630 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:21.630 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:21.630 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:21.890 [2024-11-05 04:25:35.411385] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:21.890 request: 00:14:21.890 { 00:14:21.890 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:21.890 "nsid": 2, 00:14:21.890 "host": "nqn.2016-06.io.spdk:host1", 00:14:21.890 "method": "nvmf_ns_remove_host", 00:14:21.890 "req_id": 1 00:14:21.890 } 00:14:21.890 Got JSON-RPC error response 00:14:21.890 response: 00:14:21.890 { 00:14:21.890 "code": -32602, 00:14:21.890 "message": "Invalid parameters" 00:14:21.890 } 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:21.890 04:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:21.890 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:21.891 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:21.891 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:21.891 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.152 [ 0]:0x2 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=73bd5622e19c49df99f6b4539d7f2da8 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 73bd5622e19c49df99f6b4539d7f2da8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2934887 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2934887 /var/tmp/host.sock 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2934887 ']' 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:22.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:22.152 04:25:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:22.413 [2024-11-05 04:25:35.816704] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:14:22.413 [2024-11-05 04:25:35.816763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2934887 ] 00:14:22.413 [2024-11-05 04:25:35.904012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.413 [2024-11-05 04:25:35.939917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.985 04:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:22.985 04:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:14:22.985 04:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.246 04:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:23.508 04:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid a007688e-7e1d-4576-8b62-7eede463352a 00:14:23.508 04:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:23.508 04:25:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A007688E7E1D45768B627EEDE463352A -i 00:14:23.508 04:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 144fe5b5-aedb-4d86-b783-de15bc3ab650 00:14:23.508 04:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:23.508 04:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 144FE5B5AEDB4D86B783DE15BC3AB650 -i 00:14:23.768 04:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:24.029 04:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:24.030 04:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:24.030 04:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:24.290 nvme0n1 00:14:24.290 04:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:24.290 04:25:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:24.863 nvme1n2 00:14:24.863 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:24.863 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:24.863 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:24.863 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:24.863 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:24.863 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:24.863 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:24.863 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:24.863 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:25.124 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ a007688e-7e1d-4576-8b62-7eede463352a == \a\0\0\7\6\8\8\e\-\7\e\1\d\-\4\5\7\6\-\8\b\6\2\-\7\e\e\d\e\4\6\3\3\5\2\a ]] 00:14:25.124 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:25.124 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:25.124 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:25.385 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
144fe5b5-aedb-4d86-b783-de15bc3ab650 == \1\4\4\f\e\5\b\5\-\a\e\d\b\-\4\d\8\6\-\b\7\8\3\-\d\e\1\5\b\c\3\a\b\6\5\0 ]] 00:14:25.385 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:25.385 04:25:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:25.646 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid a007688e-7e1d-4576-8b62-7eede463352a 00:14:25.646 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:25.646 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A007688E7E1D45768B627EEDE463352A 00:14:25.646 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:25.646 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A007688E7E1D45768B627EEDE463352A 00:14:25.646 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.646 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.646 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.646 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.646 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.646 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:25.646 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:25.646 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:25.646 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g A007688E7E1D45768B627EEDE463352A 00:14:25.646 [2024-11-05 04:25:39.282038] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:14:25.646 [2024-11-05 04:25:39.282072] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:14:25.646 [2024-11-05 04:25:39.282082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:25.906 request: 00:14:25.906 { 00:14:25.906 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:25.906 "namespace": { 00:14:25.906 "bdev_name": 
"invalid", 00:14:25.906 "nsid": 1, 00:14:25.906 "nguid": "A007688E7E1D45768B627EEDE463352A", 00:14:25.906 "no_auto_visible": false 00:14:25.906 }, 00:14:25.906 "method": "nvmf_subsystem_add_ns", 00:14:25.906 "req_id": 1 00:14:25.906 } 00:14:25.906 Got JSON-RPC error response 00:14:25.906 response: 00:14:25.906 { 00:14:25.906 "code": -32602, 00:14:25.906 "message": "Invalid parameters" 00:14:25.906 } 00:14:25.906 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:25.906 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:25.906 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:25.906 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:25.906 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid a007688e-7e1d-4576-8b62-7eede463352a 00:14:25.907 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:25.907 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g A007688E7E1D45768B627EEDE463352A -i 00:14:25.907 04:25:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2934887 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2934887 ']' 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2934887 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2934887 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2934887' 00:14:28.452 killing process with pid 2934887 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2934887 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2934887 00:14:28.452 04:25:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:28.713 rmmod nvme_tcp 00:14:28.713 rmmod nvme_fabrics 00:14:28.713 rmmod nvme_keyring 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2932483 ']' 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2932483 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2932483 ']' 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2932483 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2932483 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2932483' 00:14:28.713 killing process with pid 2932483 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2932483 00:14:28.713 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2932483 00:14:28.974 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:28.974 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:28.974 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:28.974 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:28.974 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:28.974 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
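From here the run is in teardown: nvmftestfini removes the target-side state, stops the nvmf target application, unloads the kernel initiator modules, and restores networking. The visible steps, condensed into a sketch (the pid and interface name are specific to this run):

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the subsystem and its namespaces
    kill 2932483 && wait 2932483                              # stop the nvmf target app (reactor_0)
    modprobe -v -r nvme-tcp                                   # unload initiator modules
    modprobe -v -r nvme-fabrics                               # also drops nvme_keyring, per the rmmod lines
    iptables-save | grep -v SPDK_NVMF | iptables-restore      # strip SPDK's firewall rules
    ip -4 addr flush cvl_0_1                                  # clear the test NIC address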
00:14:28.974 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:28.974 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:28.974 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:28.974 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.974 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.974 04:25:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.888 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:30.888 00:14:30.888 real 0m27.875s 00:14:30.888 user 0m31.580s 00:14:30.888 sys 0m7.976s 00:14:30.888 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:30.888 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:30.888 ************************************ 00:14:30.888 END TEST nvmf_ns_masking 00:14:30.888 ************************************ 00:14:30.888 04:25:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:30.888 04:25:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:30.888 04:25:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:30.888 04:25:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:30.888 04:25:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:31.150 ************************************ 00:14:31.150 START TEST nvmf_nvme_cli 00:14:31.150 ************************************ 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:31.150 * Looking for test storage... 
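With nvmf_ns_masking finished (27.9s wall time per the summary above) and nvmf_nvme_cli starting, the target setup the masking test relied on is worth collecting in one place. A sketch assembled from the RPC calls and the nvme connect earlier in this log, long paths abbreviated:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc1               # 64 MiB bdev, 512-byte blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I 46c95db3-e227-47fb-aa93-13371d796f4f -a 10.0.0.2 -s 4420 -i 4

The -a on nvmf_create_subsystem allows any host to connect, which is precisely why visibility then has to be restricted per namespace; the serial SPDKISFASTANDAWESOME is what waitforserial greps for in lsblk output to detect the attached device.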
00:14:31.150 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.150 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:31.150 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.150 --rc genhtml_branch_coverage=1 00:14:31.151 --rc genhtml_function_coverage=1 00:14:31.151 --rc genhtml_legend=1 00:14:31.151 --rc geninfo_all_blocks=1 00:14:31.151 --rc geninfo_unexecuted_blocks=1 00:14:31.151 00:14:31.151 ' 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:31.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.151 --rc genhtml_branch_coverage=1 00:14:31.151 --rc genhtml_function_coverage=1 00:14:31.151 --rc genhtml_legend=1 00:14:31.151 --rc geninfo_all_blocks=1 00:14:31.151 --rc geninfo_unexecuted_blocks=1 00:14:31.151 00:14:31.151 ' 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:31.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.151 --rc genhtml_branch_coverage=1 00:14:31.151 --rc genhtml_function_coverage=1 00:14:31.151 --rc genhtml_legend=1 00:14:31.151 --rc geninfo_all_blocks=1 00:14:31.151 --rc geninfo_unexecuted_blocks=1 00:14:31.151 00:14:31.151 ' 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:31.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.151 --rc genhtml_branch_coverage=1 00:14:31.151 --rc genhtml_function_coverage=1 00:14:31.151 --rc genhtml_legend=1 00:14:31.151 --rc geninfo_all_blocks=1 00:14:31.151 --rc geninfo_unexecuted_blocks=1 00:14:31.151 00:14:31.151 ' 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
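The scripts/common.sh tracing here is the harness checking whether the installed lcov is new enough: lt 1.15 2 calls cmp_versions, which splits both version strings on ., -, and : and compares them field by field. The same logic, reduced to a standalone sketch:

    IFS='.-:' read -ra ver1 <<< "1.15"
    IFS='.-:' read -ra ver2 <<< "2"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { echo older; break; }   # 1 < 2: lt succeeds here
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { echo newer; break; }
    done

Because the first fields already differ (1 < 2), lt returns success and the branch-coverage LCOV_OPTS traced just above get exported.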
00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.151 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.151 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:31.413 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:31.414 04:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:31.414 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:31.414 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:31.414 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.414 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:31.414 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:31.414 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:31.414 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.414 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.414 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.414 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:31.414 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:31.414 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:31.414 04:25:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:38.127 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:38.127 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.127 
04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:38.127 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:38.127 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:38.127 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:38.128 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.128 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.128 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.128 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:38.128 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.128 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.128 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:38.128 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:38.128 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.128 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.128 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:38.128 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:38.128 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.389 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.389 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.389 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.389 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:38.389 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.389 04:25:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.389 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.389 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:38.389 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:38.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:14:38.389 00:14:38.389 --- 10.0.0.2 ping statistics --- 00:14:38.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.389 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:14:38.389 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:14:38.650 00:14:38.650 --- 10.0.0.1 ping statistics --- 00:14:38.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.650 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:14:38.650 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.650 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:38.650 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:38.650 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.650 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:38.650 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:38.650 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.650 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:38.650 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:38.650 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:38.651 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:38.651 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:38.651 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.651 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2940384 00:14:38.651 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2940384 00:14:38.651 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.651 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 2940384 ']' 00:14:38.651 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.651 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:38.651 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.651 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:38.651 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:38.651 [2024-11-05 04:25:52.148096] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:14:38.651 [2024-11-05 04:25:52.148164] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.651 [2024-11-05 04:25:52.230630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.651 [2024-11-05 04:25:52.273687] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.651 [2024-11-05 04:25:52.273725] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.651 [2024-11-05 04:25:52.273733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.651 [2024-11-05 04:25:52.273740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.651 [2024-11-05 04:25:52.273751] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.651 [2024-11-05 04:25:52.275570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.651 [2024-11-05 04:25:52.275707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.651 [2024-11-05 04:25:52.275864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.651 [2024-11-05 04:25:52.275864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:39.595 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:39.595 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:14:39.595 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:39.595 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:39.595 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.595 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.595 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:39.595 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.595 04:25:52 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.595 [2024-11-05 04:25:53.002619] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.595 Malloc0 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.595 Malloc1 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.595 [2024-11-05 04:25:53.100575] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.595 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:14:39.856 00:14:39.856 Discovery Log Number of Records 2, Generation counter 2 00:14:39.856 =====Discovery Log Entry 0====== 00:14:39.856 trtype: tcp 00:14:39.856 adrfam: ipv4 00:14:39.856 subtype: current discovery subsystem 00:14:39.856 treq: not required 00:14:39.856 portid: 0 00:14:39.856 trsvcid: 4420 00:14:39.856 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:14:39.856 traddr: 10.0.0.2 00:14:39.856 eflags: explicit discovery connections, duplicate discovery information 00:14:39.856 sectype: none 00:14:39.856 =====Discovery Log Entry 1====== 00:14:39.856 trtype: tcp 00:14:39.856 adrfam: ipv4 00:14:39.856 subtype: nvme subsystem 00:14:39.856 treq: not required 00:14:39.856 portid: 0 00:14:39.856 trsvcid: 4420 00:14:39.856 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:39.856 traddr: 10.0.0.2 00:14:39.856 eflags: none 00:14:39.856 sectype: none 00:14:39.856 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:39.856 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:39.856 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:39.856 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:39.856 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:39.856 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:39.856 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:39.856 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:39.856 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:39.856 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:39.856 04:25:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:41.767 04:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:41.767 04:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:14:41.767 04:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:41.767 04:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:14:41.767 04:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:14:41.767 04:25:54 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:43.679 04:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:43.679 /dev/nvme0n2 ]] 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:43.679 04:25:56 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:43.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.679 04:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:43.679 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:14:43.679 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:43.679 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.679 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:43.679 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:43.679 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:14:43.679 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:43.680 rmmod nvme_tcp 00:14:43.680 rmmod nvme_fabrics 00:14:43.680 rmmod nvme_keyring 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2940384 ']' 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2940384 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 2940384 ']' 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 2940384 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:43.680 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
2940384 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2940384' 00:14:43.941 killing process with pid 2940384 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 2940384 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 2940384 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:43.941 04:25:57 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:46.488 00:14:46.488 real 0m15.003s 00:14:46.488 user 0m22.903s 00:14:46.488 sys 0m6.233s 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:46.488 ************************************ 00:14:46.488 END TEST nvmf_nvme_cli 00:14:46.488 ************************************ 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:46.488 ************************************ 00:14:46.488 START TEST nvmf_vfio_user 00:14:46.488 ************************************ 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:14:46.488 * Looking for test storage... 00:14:46.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:46.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.488 --rc genhtml_branch_coverage=1 00:14:46.488 --rc genhtml_function_coverage=1 00:14:46.488 --rc genhtml_legend=1 00:14:46.488 --rc geninfo_all_blocks=1 00:14:46.488 --rc geninfo_unexecuted_blocks=1 00:14:46.488 00:14:46.488 ' 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:46.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.488 --rc genhtml_branch_coverage=1 00:14:46.488 --rc genhtml_function_coverage=1 00:14:46.488 --rc genhtml_legend=1 00:14:46.488 --rc geninfo_all_blocks=1 00:14:46.488 --rc geninfo_unexecuted_blocks=1 00:14:46.488 00:14:46.488 ' 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:46.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.488 --rc genhtml_branch_coverage=1 00:14:46.488 --rc genhtml_function_coverage=1 00:14:46.488 --rc genhtml_legend=1 00:14:46.488 --rc geninfo_all_blocks=1 00:14:46.488 --rc geninfo_unexecuted_blocks=1 00:14:46.488 00:14:46.488 ' 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:46.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.488 --rc genhtml_branch_coverage=1 00:14:46.488 --rc genhtml_function_coverage=1 00:14:46.488 --rc genhtml_legend=1 00:14:46.488 --rc geninfo_all_blocks=1 00:14:46.488 --rc geninfo_unexecuted_blocks=1 00:14:46.488 00:14:46.488 ' 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.488 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:46.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
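The "[: : integer expression expected" complaint from nvmf/common.sh line 33 (seen here and in the nvme_cli run above) is bash's test builtin refusing a numeric comparison against an empty expansion; because the test sits inside a conditional, the non-zero status simply falls through and the run continues. A minimal reproduction plus one conventional guard; the flag name below is hypothetical and stands in for whatever variable common.sh actually checks:

    unset SPDK_EXAMPLE_FLAG                    # hypothetical variable name
    if [ "$SPDK_EXAMPLE_FLAG" -eq 1 ]; then    # expands to: [ '' -eq 1 ]
        echo "feature enabled"
    fi
    # stderr: [: : integer expression expected
    # the test exits with status 2, which just selects the else path,
    # so the surrounding script keeps going exactly as the log shows

    if [ "${SPDK_EXAMPLE_FLAG:-0}" -eq 1 ]; then   # default the expansion so
        echo "feature enabled"                     # test always sees an integer
    fi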
00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2941970 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2941970' 00:14:46.489 Process pid: 2941970 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2941970 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 2941970 ']' 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:46.489 04:25:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:46.489 [2024-11-05 04:25:59.918305] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:14:46.489 [2024-11-05 04:25:59.918362] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.489 [2024-11-05 04:25:59.990727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:46.489 [2024-11-05 04:26:00.028134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.489 [2024-11-05 04:26:00.028170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:46.489 [2024-11-05 04:26:00.028179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.489 [2024-11-05 04:26:00.028185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.489 [2024-11-05 04:26:00.028192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:46.489 [2024-11-05 04:26:00.029712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.489 [2024-11-05 04:26:00.029875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.489 [2024-11-05 04:26:00.029758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.489 [2024-11-05 04:26:00.030080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.431 04:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:47.431 04:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:14:47.431 04:26:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:48.372 04:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:48.372 04:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:48.372 04:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:48.372 04:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:48.372 04:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:48.372 04:26:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:48.633 Malloc1 00:14:48.633 04:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:48.894 04:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:48.894 04:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:49.154 04:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:49.154 04:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:49.154 04:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:49.414 Malloc2 00:14:49.414 04:26:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
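For reference, the setup_nvmf_vfio_user sequence traced above and just below reduces to one transport plus, per device, a socket directory, a malloc bdev, a subsystem, a namespace, and a vfio-user listener. A condensed sketch of the same RPC calls (rpc.py path shortened; the nvmf_tgt process is assumed to be up and listening on /var/tmp/spdk.sock):

rpc=scripts/rpc.py                        # the log uses the absolute workspace path
$rpc nvmf_create_transport -t VFIOUSER    # one VFIOUSER transport shared by all devices
for i in 1 2; do                          # NUM_DEVICES=2 above
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i   # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done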
00:14:49.414 04:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:49.675 04:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:49.938 04:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:49.938 04:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:49.938 04:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:49.938 04:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:49.938 04:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:49.938 04:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:49.938 [2024-11-05 04:26:03.392607] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:14:49.938 [2024-11-05 04:26:03.392658] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2942681 ] 00:14:49.938 [2024-11-05 04:26:03.447870] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:49.938 [2024-11-05 04:26:03.456124] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:49.938 [2024-11-05 04:26:03.456145] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1835c39000 00:14:49.938 [2024-11-05 04:26:03.457132] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.938 [2024-11-05 04:26:03.458133] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.938 [2024-11-05 04:26:03.459135] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.938 [2024-11-05 04:26:03.460144] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:49.938 [2024-11-05 04:26:03.461142] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:49.938 [2024-11-05 04:26:03.462150] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.938 [2024-11-05 04:26:03.463158] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:14:49.938 [2024-11-05 04:26:03.464157] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:49.938 [2024-11-05 04:26:03.465171] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:49.939 [2024-11-05 04:26:03.465185] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1835c2e000 00:14:49.939 [2024-11-05 04:26:03.466513] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:49.939 [2024-11-05 04:26:03.487907] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:49.939 [2024-11-05 04:26:03.487934] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:49.939 [2024-11-05 04:26:03.490325] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:49.939 [2024-11-05 04:26:03.490371] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:49.939 [2024-11-05 04:26:03.490457] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:49.939 [2024-11-05 04:26:03.490478] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:49.939 [2024-11-05 04:26:03.490484] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:49.939 [2024-11-05 04:26:03.491322] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:49.939 [2024-11-05 04:26:03.491332] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:49.939 [2024-11-05 04:26:03.491344] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:49.939 [2024-11-05 04:26:03.492330] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:49.939 [2024-11-05 04:26:03.492340] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:49.939 [2024-11-05 04:26:03.492348] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:49.939 [2024-11-05 04:26:03.493336] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:49.939 [2024-11-05 04:26:03.493345] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:49.939 [2024-11-05 04:26:03.494339] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
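Everything from the BAR scan above through the CC/CSTS register polling below is the identify utility bringing the controller up over vfio-user; the whole trace needs only the transport ID string. A sketch of the invocation, with the path shortened (the full command line appears earlier in the log):

# traddr is the directory holding the controller's cntrl socket; the -L flags
# switch on exactly the DEBUG components (nvme, nvme_vfio, vfio_pci) seen here.
build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -g -L nvme -L nvme_vfio -L vfio_pci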
00:14:49.939 [2024-11-05 04:26:03.494348] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:49.939 [2024-11-05 04:26:03.494354] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:49.939 [2024-11-05 04:26:03.494361] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:49.939 [2024-11-05 04:26:03.494467] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:49.939 [2024-11-05 04:26:03.494472] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:49.939 [2024-11-05 04:26:03.494478] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:49.939 [2024-11-05 04:26:03.495357] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:49.939 [2024-11-05 04:26:03.496355] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:49.939 [2024-11-05 04:26:03.497366] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:49.939 [2024-11-05 04:26:03.498364] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:49.939 [2024-11-05 04:26:03.498430] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:49.939 [2024-11-05 04:26:03.499371] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:49.939 [2024-11-05 04:26:03.499379] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:49.939 [2024-11-05 04:26:03.499385] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:49.939 [2024-11-05 04:26:03.499406] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:49.939 [2024-11-05 04:26:03.499418] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:49.939 [2024-11-05 04:26:03.499435] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:49.939 [2024-11-05 04:26:03.499442] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.939 [2024-11-05 04:26:03.499446] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.939 [2024-11-05 04:26:03.499460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:14:49.939 [2024-11-05 04:26:03.499506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:49.939 [2024-11-05 04:26:03.499516] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:49.939 [2024-11-05 04:26:03.499522] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:49.939 [2024-11-05 04:26:03.499526] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:49.939 [2024-11-05 04:26:03.499531] nvme_ctrlr.c:2072:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:49.939 [2024-11-05 04:26:03.499536] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:49.939 [2024-11-05 04:26:03.499541] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:49.939 [2024-11-05 04:26:03.499546] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:49.939 [2024-11-05 04:26:03.499554] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:49.939 [2024-11-05 04:26:03.499565] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:49.939 [2024-11-05 04:26:03.499575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:49.939 [2024-11-05 04:26:03.499588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.939 [2024-11-05 04:26:03.499598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.939 [2024-11-05 04:26:03.499606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.939 [2024-11-05 04:26:03.499615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:49.939 [2024-11-05 04:26:03.499620] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:49.939 [2024-11-05 04:26:03.499627] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:49.939 [2024-11-05 04:26:03.499636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:49.939 [2024-11-05 04:26:03.499646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:49.939 [2024-11-05 04:26:03.499654] nvme_ctrlr.c:3011:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:49.939 
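Since the target was started with tracepoint group mask 0xFFFF (per the app_setup_trace notices earlier in this log), the same admin-queue traffic can also be inspected from the target side. A sketch, using the command the notice itself suggests:

build/bin/spdk_trace -s nvmf -i 0   # snapshot of target-side events, per the notice
# or copy /dev/shm/nvmf_trace.0 aside for offline analysis, as the notice also suggests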
[2024-11-05 04:26:03.499659] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:49.939 [2024-11-05 04:26:03.499666] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:49.939 [2024-11-05 04:26:03.499676] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:49.939 [2024-11-05 04:26:03.499686] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:49.939 [2024-11-05 04:26:03.499696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:49.939 [2024-11-05 04:26:03.499771] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:49.939 [2024-11-05 04:26:03.499780] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:49.939 [2024-11-05 04:26:03.499788] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:49.939 [2024-11-05 04:26:03.499793] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:49.939 [2024-11-05 04:26:03.499796] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.939 [2024-11-05 04:26:03.499803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:49.939 [2024-11-05 04:26:03.499815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:49.939 [2024-11-05 04:26:03.499825] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:49.939 [2024-11-05 04:26:03.499834] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:49.939 [2024-11-05 04:26:03.499842] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:49.939 [2024-11-05 04:26:03.499850] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:49.939 [2024-11-05 04:26:03.499854] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.939 [2024-11-05 04:26:03.499858] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.939 [2024-11-05 04:26:03.499864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.939 [2024-11-05 04:26:03.499878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:49.939 [2024-11-05 04:26:03.499891] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:14:49.940 [2024-11-05 04:26:03.499900] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:49.940 [2024-11-05 04:26:03.499907] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:49.940 [2024-11-05 04:26:03.499911] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.940 [2024-11-05 04:26:03.499915] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.940 [2024-11-05 04:26:03.499921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.940 [2024-11-05 04:26:03.499935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:49.940 [2024-11-05 04:26:03.499944] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:49.940 [2024-11-05 04:26:03.499953] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:49.940 [2024-11-05 04:26:03.499961] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:49.940 [2024-11-05 04:26:03.499967] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:49.940 [2024-11-05 04:26:03.499972] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:49.940 [2024-11-05 04:26:03.499978] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:49.940 [2024-11-05 04:26:03.499983] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:49.940 [2024-11-05 04:26:03.499988] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:49.940 [2024-11-05 04:26:03.499994] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:49.940 [2024-11-05 04:26:03.500013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:49.940 [2024-11-05 04:26:03.500023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:49.940 [2024-11-05 04:26:03.500036] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:49.940 [2024-11-05 04:26:03.500044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:49.940 [2024-11-05 04:26:03.500055] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:49.940 [2024-11-05 04:26:03.500063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:49.940 [2024-11-05 04:26:03.500074] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:49.940 [2024-11-05 04:26:03.500081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:49.940 [2024-11-05 04:26:03.500095] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:49.940 [2024-11-05 04:26:03.500100] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:49.940 [2024-11-05 04:26:03.500103] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:49.940 [2024-11-05 04:26:03.500107] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:49.940 [2024-11-05 04:26:03.500111] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:49.940 [2024-11-05 04:26:03.500117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:49.940 [2024-11-05 04:26:03.500125] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:49.940 [2024-11-05 04:26:03.500129] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:49.940 [2024-11-05 04:26:03.500133] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.940 [2024-11-05 04:26:03.500139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:49.940 [2024-11-05 04:26:03.500147] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:49.940 [2024-11-05 04:26:03.500152] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:49.940 [2024-11-05 04:26:03.500156] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.940 [2024-11-05 04:26:03.500162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:49.940 [2024-11-05 04:26:03.500172] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:49.940 [2024-11-05 04:26:03.500177] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:49.940 [2024-11-05 04:26:03.500181] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:49.940 [2024-11-05 04:26:03.500186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:49.940 [2024-11-05 04:26:03.500194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:49.940 [2024-11-05 04:26:03.500206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:14:49.940 [2024-11-05 04:26:03.500217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:49.940 [2024-11-05 04:26:03.500225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:49.940 ===================================================== 00:14:49.940 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:49.940 ===================================================== 00:14:49.940 Controller Capabilities/Features 00:14:49.940 ================================ 00:14:49.940 Vendor ID: 4e58 00:14:49.940 Subsystem Vendor ID: 4e58 00:14:49.940 Serial Number: SPDK1 00:14:49.940 Model Number: SPDK bdev Controller 00:14:49.940 Firmware Version: 25.01 00:14:49.940 Recommended Arb Burst: 6 00:14:49.940 IEEE OUI Identifier: 8d 6b 50 00:14:49.940 Multi-path I/O 00:14:49.940 May have multiple subsystem ports: Yes 00:14:49.940 May have multiple controllers: Yes 00:14:49.940 Associated with SR-IOV VF: No 00:14:49.940 Max Data Transfer Size: 131072 00:14:49.940 Max Number of Namespaces: 32 00:14:49.940 Max Number of I/O Queues: 127 00:14:49.940 NVMe Specification Version (VS): 1.3 00:14:49.940 NVMe Specification Version (Identify): 1.3 00:14:49.940 Maximum Queue Entries: 256 00:14:49.940 Contiguous Queues Required: Yes 00:14:49.940 Arbitration Mechanisms Supported 00:14:49.940 Weighted Round Robin: Not Supported 00:14:49.940 Vendor Specific: Not Supported 00:14:49.940 Reset Timeout: 15000 ms 00:14:49.940 Doorbell Stride: 4 bytes 00:14:49.940 NVM Subsystem Reset: Not Supported 00:14:49.940 Command Sets Supported 00:14:49.940 NVM Command Set: Supported 00:14:49.940 Boot Partition: Not Supported 00:14:49.940 Memory Page Size Minimum: 4096 bytes 00:14:49.940 Memory Page Size Maximum: 4096 bytes 00:14:49.940 Persistent Memory Region: Not Supported 00:14:49.940 Optional Asynchronous Events Supported 00:14:49.940 Namespace Attribute Notices: Supported 00:14:49.940 Firmware Activation Notices: Not Supported 00:14:49.940 ANA Change Notices: Not Supported 00:14:49.940 PLE Aggregate Log Change Notices: Not Supported 00:14:49.940 LBA Status Info Alert Notices: Not Supported 00:14:49.940 EGE Aggregate Log Change Notices: Not Supported 00:14:49.940 Normal NVM Subsystem Shutdown event: Not Supported 00:14:49.940 Zone Descriptor Change Notices: Not Supported 00:14:49.940 Discovery Log Change Notices: Not Supported 00:14:49.940 Controller Attributes 00:14:49.940 128-bit Host Identifier: Supported 00:14:49.940 Non-Operational Permissive Mode: Not Supported 00:14:49.940 NVM Sets: Not Supported 00:14:49.940 Read Recovery Levels: Not Supported 00:14:49.940 Endurance Groups: Not Supported 00:14:49.940 Predictable Latency Mode: Not Supported 00:14:49.940 Traffic Based Keep ALive: Not Supported 00:14:49.940 Namespace Granularity: Not Supported 00:14:49.940 SQ Associations: Not Supported 00:14:49.940 UUID List: Not Supported 00:14:49.940 Multi-Domain Subsystem: Not Supported 00:14:49.940 Fixed Capacity Management: Not Supported 00:14:49.940 Variable Capacity Management: Not Supported 00:14:49.940 Delete Endurance Group: Not Supported 00:14:49.940 Delete NVM Set: Not Supported 00:14:49.940 Extended LBA Formats Supported: Not Supported 00:14:49.940 Flexible Data Placement Supported: Not Supported 00:14:49.940 00:14:49.940 Controller Memory Buffer Support 00:14:49.940 ================================ 00:14:49.940 
Supported: No 00:14:49.940 00:14:49.940 Persistent Memory Region Support 00:14:49.940 ================================ 00:14:49.940 Supported: No 00:14:49.940 00:14:49.940 Admin Command Set Attributes 00:14:49.940 ============================ 00:14:49.940 Security Send/Receive: Not Supported 00:14:49.940 Format NVM: Not Supported 00:14:49.940 Firmware Activate/Download: Not Supported 00:14:49.940 Namespace Management: Not Supported 00:14:49.940 Device Self-Test: Not Supported 00:14:49.940 Directives: Not Supported 00:14:49.940 NVMe-MI: Not Supported 00:14:49.940 Virtualization Management: Not Supported 00:14:49.940 Doorbell Buffer Config: Not Supported 00:14:49.940 Get LBA Status Capability: Not Supported 00:14:49.940 Command & Feature Lockdown Capability: Not Supported 00:14:49.940 Abort Command Limit: 4 00:14:49.940 Async Event Request Limit: 4 00:14:49.940 Number of Firmware Slots: N/A 00:14:49.940 Firmware Slot 1 Read-Only: N/A 00:14:49.940 Firmware Activation Without Reset: N/A 00:14:49.940 Multiple Update Detection Support: N/A 00:14:49.941 Firmware Update Granularity: No Information Provided 00:14:49.941 Per-Namespace SMART Log: No 00:14:49.941 Asymmetric Namespace Access Log Page: Not Supported 00:14:49.941 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:49.941 Command Effects Log Page: Supported 00:14:49.941 Get Log Page Extended Data: Supported 00:14:49.941 Telemetry Log Pages: Not Supported 00:14:49.941 Persistent Event Log Pages: Not Supported 00:14:49.941 Supported Log Pages Log Page: May Support 00:14:49.941 Commands Supported & Effects Log Page: Not Supported 00:14:49.941 Feature Identifiers & Effects Log Page:May Support 00:14:49.941 NVMe-MI Commands & Effects Log Page: May Support 00:14:49.941 Data Area 4 for Telemetry Log: Not Supported 00:14:49.941 Error Log Page Entries Supported: 128 00:14:49.941 Keep Alive: Supported 00:14:49.941 Keep Alive Granularity: 10000 ms 00:14:49.941 00:14:49.941 NVM Command Set Attributes 00:14:49.941 ========================== 00:14:49.941 Submission Queue Entry Size 00:14:49.941 Max: 64 00:14:49.941 Min: 64 00:14:49.941 Completion Queue Entry Size 00:14:49.941 Max: 16 00:14:49.941 Min: 16 00:14:49.941 Number of Namespaces: 32 00:14:49.941 Compare Command: Supported 00:14:49.941 Write Uncorrectable Command: Not Supported 00:14:49.941 Dataset Management Command: Supported 00:14:49.941 Write Zeroes Command: Supported 00:14:49.941 Set Features Save Field: Not Supported 00:14:49.941 Reservations: Not Supported 00:14:49.941 Timestamp: Not Supported 00:14:49.941 Copy: Supported 00:14:49.941 Volatile Write Cache: Present 00:14:49.941 Atomic Write Unit (Normal): 1 00:14:49.941 Atomic Write Unit (PFail): 1 00:14:49.941 Atomic Compare & Write Unit: 1 00:14:49.941 Fused Compare & Write: Supported 00:14:49.941 Scatter-Gather List 00:14:49.941 SGL Command Set: Supported (Dword aligned) 00:14:49.941 SGL Keyed: Not Supported 00:14:49.941 SGL Bit Bucket Descriptor: Not Supported 00:14:49.941 SGL Metadata Pointer: Not Supported 00:14:49.941 Oversized SGL: Not Supported 00:14:49.941 SGL Metadata Address: Not Supported 00:14:49.941 SGL Offset: Not Supported 00:14:49.941 Transport SGL Data Block: Not Supported 00:14:49.941 Replay Protected Memory Block: Not Supported 00:14:49.941 00:14:49.941 Firmware Slot Information 00:14:49.941 ========================= 00:14:49.941 Active slot: 1 00:14:49.941 Slot 1 Firmware Revision: 25.01 00:14:49.941 00:14:49.941 00:14:49.941 Commands Supported and Effects 00:14:49.941 ============================== 00:14:49.941 Admin 
Commands 00:14:49.941 -------------- 00:14:49.941 Get Log Page (02h): Supported 00:14:49.941 Identify (06h): Supported 00:14:49.941 Abort (08h): Supported 00:14:49.941 Set Features (09h): Supported 00:14:49.941 Get Features (0Ah): Supported 00:14:49.941 Asynchronous Event Request (0Ch): Supported 00:14:49.941 Keep Alive (18h): Supported 00:14:49.941 I/O Commands 00:14:49.941 ------------ 00:14:49.941 Flush (00h): Supported LBA-Change 00:14:49.941 Write (01h): Supported LBA-Change 00:14:49.941 Read (02h): Supported 00:14:49.941 Compare (05h): Supported 00:14:49.941 Write Zeroes (08h): Supported LBA-Change 00:14:49.941 Dataset Management (09h): Supported LBA-Change 00:14:49.941 Copy (19h): Supported LBA-Change 00:14:49.941 00:14:49.941 Error Log 00:14:49.941 ========= 00:14:49.941 00:14:49.941 Arbitration 00:14:49.941 =========== 00:14:49.941 Arbitration Burst: 1 00:14:49.941 00:14:49.941 Power Management 00:14:49.941 ================ 00:14:49.941 Number of Power States: 1 00:14:49.941 Current Power State: Power State #0 00:14:49.941 Power State #0: 00:14:49.941 Max Power: 0.00 W 00:14:49.941 Non-Operational State: Operational 00:14:49.941 Entry Latency: Not Reported 00:14:49.941 Exit Latency: Not Reported 00:14:49.941 Relative Read Throughput: 0 00:14:49.941 Relative Read Latency: 0 00:14:49.941 Relative Write Throughput: 0 00:14:49.941 Relative Write Latency: 0 00:14:49.941 Idle Power: Not Reported 00:14:49.941 Active Power: Not Reported 00:14:49.941 Non-Operational Permissive Mode: Not Supported 00:14:49.941 00:14:49.941 Health Information 00:14:49.941 ================== 00:14:49.941 Critical Warnings: 00:14:49.941 Available Spare Space: OK 00:14:49.941 Temperature: OK 00:14:49.941 Device Reliability: OK 00:14:49.941 Read Only: No 00:14:49.941 Volatile Memory Backup: OK 00:14:49.941 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:49.941 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:49.941 Available Spare: 0% 00:14:49.941 Available Spare Threshold: 0% 00:14:49.941 Life Percentage Used: 0% 00:14:49.941 Data Units Read: 0 00:14:49.941 Data Units Written: 0 00:14:49.941 Host Read Commands: 0 00:14:49.941 Host Write Commands: 0 00:14:49.941 Controller Busy Time: 0 minutes 00:14:49.941 Power Cycles: 0 00:14:49.941 Power On Hours: 0 hours 00:14:49.941 Unsafe Shutdowns: 0 00:14:49.941 Unrecoverable Media Errors: 0 00:14:49.941 Lifetime Error Log Entries: 0 00:14:49.941 Warning Temperature Time: 0 minutes 00:14:49.941 Critical Temperature Time: 0 minutes 00:14:49.941 00:14:49.941 Number of Queues 00:14:49.941 ================ 00:14:49.941 Number of I/O Submission Queues: 127 00:14:49.941 Number of I/O Completion Queues: 127 00:14:49.941 00:14:49.941 Active Namespaces 00:14:49.941 ================= 00:14:49.941 Namespace ID:1 00:14:49.941 Error Recovery Timeout: Unlimited 00:14:49.941 Command Set Identifier: NVM (00h) 00:14:49.941 Deallocate: Supported 00:14:49.941 Deallocated/Unwritten Error: Not Supported 00:14:49.941 Deallocated Read Value: Unknown 00:14:49.941 Deallocate in Write Zeroes: Not Supported 00:14:49.941 Deallocated Guard Field: 0xFFFF 00:14:49.941 Flush: Supported 00:14:49.941 Reservation: Supported 00:14:49.941 Namespace Sharing Capabilities: Multiple Controllers 00:14:49.941 Size (in LBAs): 131072 (0GiB) 00:14:49.941 Capacity (in LBAs): 131072 (0GiB) 00:14:49.941 Utilization (in LBAs): 131072 (0GiB) 00:14:49.941 NGUID: C459E5C42DB64110BA8A6DDD703050AA 00:14:49.941 UUID: c459e5c4-2db6-4110-ba8a-6ddd703050aa 00:14:49.941 Thin Provisioning: Not Supported 00:14:49.941 Per-NS Atomic Units: Yes 00:14:49.941 Atomic Boundary Size (Normal): 0 00:14:49.941 Atomic Boundary Size (PFail): 0 00:14:49.941 Atomic Boundary Offset: 0 00:14:49.941 Maximum Single Source Range Length: 65535 00:14:49.941 Maximum Copy Length: 65535 00:14:49.941 Maximum Source Range Count: 1 00:14:49.941 NGUID/EUI64 Never Reused: No 00:14:49.941 Namespace Write Protected: No 00:14:49.941 Number of LBA Formats: 1 00:14:49.941 Current LBA Format: LBA Format #00 00:14:49.941 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:49.941 00:14:49.941
[2024-11-05 04:26:03.500333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:49.941 [2024-11-05 04:26:03.500342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:49.941 [2024-11-05 04:26:03.500373] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:49.941 [2024-11-05 04:26:03.500383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.941 [2024-11-05 04:26:03.500390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.941 [2024-11-05 04:26:03.500396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.941 [2024-11-05 04:26:03.500403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:49.941 [2024-11-05 04:26:03.502753] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:49.941 [2024-11-05 04:26:03.502765] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:49.941 [2024-11-05 04:26:03.503391] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:49.941 [2024-11-05 04:26:03.503433] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:49.941 [2024-11-05 04:26:03.503439] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:49.941 [2024-11-05 04:26:03.504398] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:49.941 [2024-11-05 04:26:03.504409] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:49.941 [2024-11-05 04:26:03.504468] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:49.941 [2024-11-05 04:26:03.507755] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:49.941
04:26:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
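Flag by flag, the perf run just launched: -q 128 is the queue depth, -o 4096 the I/O size in bytes, -w read the workload (sequential reads), -t 5 the run time in seconds, and -c 0x2 pins the I/O to lcore 1, matching the "NSID 1 with lcore 1" association in the output below; -s 256 and -g shape the DPDK hugepage memory, as my reading of the EAL parameters in this log suggests. Restated as a sketch (path shortened):

# 4 KiB sequential reads, QD 128, 5 seconds, on lcore 1 only.
build/bin/spdk_nvme_perf \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2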
00:14:50.202 [2024-11-05 04:26:03.704422] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:55.651 Initializing NVMe Controllers 00:14:55.651 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:55.651 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:55.651 Initialization complete. Launching workers. 00:14:55.651 ======================================================== 00:14:55.651 Latency(us) 00:14:55.651 Device Information : IOPS MiB/s Average min max 00:14:55.651 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 40074.56 156.54 3193.93 850.75 7105.43 00:14:55.651 ======================================================== 00:14:55.651 Total : 40074.56 156.54 3193.93 850.75 7105.43 00:14:55.651 00:14:55.651 [2024-11-05 04:26:08.722808] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:55.651 04:26:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:55.651 [2024-11-05 04:26:08.915697] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.938 Initializing NVMe Controllers 00:15:00.938 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:00.938 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:00.938 Initialization complete. Launching workers. 
00:15:00.938 ======================================================== 00:15:00.938 Latency(us) 00:15:00.938 Device Information : IOPS MiB/s Average min max 00:15:00.938 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16036.72 62.64 7986.42 6571.59 14965.03 00:15:00.938 ======================================================== 00:15:00.938 Total : 16036.72 62.64 7986.42 6571.59 14965.03 00:15:00.938 00:15:00.938 [2024-11-05 04:26:13.955091] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.938 04:26:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:00.938 [2024-11-05 04:26:14.156983] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:06.228 [2024-11-05 04:26:19.242014] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:06.228 Initializing NVMe Controllers 00:15:06.228 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:06.228 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:06.228 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:06.228 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:06.228 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:06.228 Initialization complete. Launching workers. 00:15:06.228 Starting thread on core 2 00:15:06.228 Starting thread on core 3 00:15:06.228 Starting thread on core 1 00:15:06.228 04:26:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:06.228 [2024-11-05 04:26:19.522227] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:10.434 [2024-11-05 04:26:23.271894] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:10.434 Initializing NVMe Controllers 00:15:10.434 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.434 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.434 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:10.434 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:10.434 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:10.434 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:10.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:10.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:10.434 Initialization complete. Launching workers. 
00:15:10.434 Starting thread on core 1 with urgent priority queue 00:15:10.434 Starting thread on core 2 with urgent priority queue 00:15:10.434 Starting thread on core 3 with urgent priority queue 00:15:10.434 Starting thread on core 0 with urgent priority queue 00:15:10.434 SPDK bdev Controller (SPDK1 ) core 0: 320.00 IO/s 312.50 secs/100000 ios 00:15:10.434 SPDK bdev Controller (SPDK1 ) core 1: 1080.67 IO/s 92.54 secs/100000 ios 00:15:10.434 SPDK bdev Controller (SPDK1 ) core 2: 558.33 IO/s 179.10 secs/100000 ios 00:15:10.434 SPDK bdev Controller (SPDK1 ) core 3: 717.33 IO/s 139.41 secs/100000 ios 00:15:10.434 ======================================================== 00:15:10.434 00:15:10.434 04:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:10.434 [2024-11-05 04:26:23.558178] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:10.434 Initializing NVMe Controllers 00:15:10.434 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.434 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:10.434 Namespace ID: 1 size: 0GB 00:15:10.434 Initialization complete. 00:15:10.434 INFO: using host memory buffer for IO 00:15:10.434 Hello world! 00:15:10.434 [2024-11-05 04:26:23.592369] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:10.434 04:26:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:10.434 [2024-11-05 04:26:23.874180] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.379 Initializing NVMe Controllers 00:15:11.379 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.379 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.379 Initialization complete. Launching workers. 
00:15:11.379 submit (in ns) avg, min, max = 6808.9, 3900.0, 4000105.8 00:15:11.379 complete (in ns) avg, min, max = 20297.2, 2380.8, 6990470.0 00:15:11.379 00:15:11.379 Submit histogram 00:15:11.379 ================ 00:15:11.379 Range in us Cumulative Count 00:15:11.379 3.893 - 3.920: 1.4437% ( 272) 00:15:11.379 3.920 - 3.947: 8.0892% ( 1252) 00:15:11.379 3.947 - 3.973: 18.1582% ( 1897) 00:15:11.379 3.973 - 4.000: 29.5170% ( 2140) 00:15:11.379 4.000 - 4.027: 40.8068% ( 2127) 00:15:11.379 4.027 - 4.053: 52.8503% ( 2269) 00:15:11.379 4.053 - 4.080: 69.4002% ( 3118) 00:15:11.379 4.080 - 4.107: 84.1932% ( 2787) 00:15:11.379 4.107 - 4.133: 93.5297% ( 1759) 00:15:11.379 4.133 - 4.160: 97.7282% ( 791) 00:15:11.379 4.160 - 4.187: 99.1242% ( 263) 00:15:11.379 4.187 - 4.213: 99.4427% ( 60) 00:15:11.379 4.213 - 4.240: 99.4798% ( 7) 00:15:11.379 4.240 - 4.267: 99.4904% ( 2) 00:15:11.379 4.347 - 4.373: 99.4958% ( 1) 00:15:11.379 4.373 - 4.400: 99.5011% ( 1) 00:15:11.379 4.800 - 4.827: 99.5064% ( 1) 00:15:11.379 4.987 - 5.013: 99.5117% ( 1) 00:15:11.379 5.147 - 5.173: 99.5170% ( 1) 00:15:11.379 5.413 - 5.440: 99.5223% ( 1) 00:15:11.379 5.467 - 5.493: 99.5276% ( 1) 00:15:11.379 5.520 - 5.547: 99.5329% ( 1) 00:15:11.379 5.680 - 5.707: 99.5382% ( 1) 00:15:11.379 5.733 - 5.760: 99.5435% ( 1) 00:15:11.379 5.787 - 5.813: 99.5488% ( 1) 00:15:11.379 5.813 - 5.840: 99.5594% ( 2) 00:15:11.379 5.920 - 5.947: 99.5701% ( 2) 00:15:11.379 5.947 - 5.973: 99.5754% ( 1) 00:15:11.380 6.107 - 6.133: 99.5860% ( 2) 00:15:11.380 6.133 - 6.160: 99.5913% ( 1) 00:15:11.380 6.160 - 6.187: 99.5966% ( 1) 00:15:11.380 6.240 - 6.267: 99.6019% ( 1) 00:15:11.380 6.267 - 6.293: 99.6072% ( 1) 00:15:11.380 6.293 - 6.320: 99.6125% ( 1) 00:15:11.380 6.320 - 6.347: 99.6338% ( 4) 00:15:11.380 6.373 - 6.400: 99.6444% ( 2) 00:15:11.380 6.400 - 6.427: 99.6497% ( 1) 00:15:11.380 6.507 - 6.533: 99.6550% ( 1) 00:15:11.380 6.560 - 6.587: 99.6709% ( 3) 00:15:11.380 6.587 - 6.613: 99.6762% ( 1) 00:15:11.380 6.613 - 6.640: 99.6815% ( 1) 00:15:11.380 6.640 - 6.667: 99.6868% ( 1) 00:15:11.380 6.720 - 6.747: 99.6921% ( 1) 00:15:11.380 6.747 - 6.773: 99.7028% ( 2) 00:15:11.380 6.800 - 6.827: 99.7134% ( 2) 00:15:11.380 6.827 - 6.880: 99.7187% ( 1) 00:15:11.380 6.880 - 6.933: 99.7240% ( 1) 00:15:11.380 6.933 - 6.987: 99.7346% ( 2) 00:15:11.380 6.987 - 7.040: 99.7558% ( 4) 00:15:11.380 7.040 - 7.093: 99.7665% ( 2) 00:15:11.380 7.147 - 7.200: 99.7824% ( 3) 00:15:11.380 7.253 - 7.307: 99.7983% ( 3) 00:15:11.380 7.307 - 7.360: 99.8142% ( 3) 00:15:11.380 7.360 - 7.413: 99.8195% ( 1) 00:15:11.380 7.413 - 7.467: 99.8248% ( 1) 00:15:11.380 7.467 - 7.520: 99.8514% ( 5) 00:15:11.380 7.520 - 7.573: 99.8567% ( 1) 00:15:11.380 7.573 - 7.627: 99.8620% ( 1) 00:15:11.380 7.627 - 7.680: 99.8779% ( 3) 00:15:11.380 7.680 - 7.733: 99.8832% ( 1) 00:15:11.380 7.840 - 7.893: 99.8885% ( 1) 00:15:11.380 7.893 - 7.947: 99.9045% ( 3) 00:15:11.380 8.000 - 8.053: 99.9098% ( 1) 00:15:11.380 8.213 - 8.267: 99.9151% ( 1) 00:15:11.380 8.480 - 8.533: 99.9204% ( 1) 00:15:11.380 9.280 - 9.333: 99.9257% ( 1) 00:15:11.380 14.400 - 14.507: 99.9310% ( 1) 00:15:11.380 3986.773 - 4014.080: 100.0000% ( 13) 00:15:11.380 00:15:11.380 Complete histogram 00:15:11.380 ================== 00:15:11.380 Range in us Cumulative Count 00:15:11.380 2.373 - 2.387: 0.0053% ( 1) 00:15:11.380 2.387 - 2.400: 0.0159% ( 2) 00:15:11.380 2.400 - 2.413: 0.5732% ( 105) 00:15:11.380 2.413 - 2.427: 0.6369% ( 12) 00:15:11.380 2.427 - 2.440: 0.8599% ( 42) 00:15:11.380 2.440 - 2.453: 45.3928% ( 8390) 00:15:11.380 2.453 - 2.467: 
51.0616% ( 1068) 00:15:11.380 2.467 - 2.480: 72.1975% ( 3982) 00:15:11.380 2.480 - 2.493: 78.9172% ( 1266) 00:15:11.380 2.493 - 2.507: 80.7962% ( 354) 00:15:11.380 [2024-11-05 04:26:24.896737] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.380 2.507 - 2.520: 84.7983% ( 754) 00:15:11.380 2.520 - 2.533: 91.2527% ( 1216) 00:15:11.380 2.533 - 2.547: 95.1645% ( 737) 00:15:11.380 2.547 - 2.560: 97.6592% ( 470) 00:15:11.380 2.560 - 2.573: 98.7845% ( 212) 00:15:11.380 2.573 - 2.587: 99.2463% ( 87) 00:15:11.380 2.587 - 2.600: 99.3100% ( 12) 00:15:11.380 2.600 - 2.613: 99.3312% ( 4) 00:15:11.380 2.613 - 2.627: 99.3471% ( 3) 00:15:11.380 2.627 - 2.640: 99.3524% ( 1) 00:15:11.380 4.133 - 4.160: 99.3577% ( 1) 00:15:11.380 4.267 - 4.293: 99.3631% ( 1) 00:15:11.380 4.480 - 4.507: 99.3684% ( 1) 00:15:11.380 4.587 - 4.613: 99.3737% ( 1) 00:15:11.380 4.640 - 4.667: 99.3790% ( 1) 00:15:11.380 4.720 - 4.747: 99.3843% ( 1) 00:15:11.380 4.747 - 4.773: 99.3896% ( 1) 00:15:11.380 4.800 - 4.827: 99.4002% ( 2) 00:15:11.380 4.853 - 4.880: 99.4055% ( 1) 00:15:11.380 4.907 - 4.933: 99.4108% ( 1) 00:15:11.380 4.933 - 4.960: 99.4161% ( 1) 00:15:11.380 5.040 - 5.067: 99.4214% ( 1) 00:15:11.380 5.173 - 5.200: 99.4321% ( 2) 00:15:11.380 5.253 - 5.280: 99.4374% ( 1) 00:15:11.380 5.333 - 5.360: 99.4427% ( 1) 00:15:11.380 5.413 - 5.440: 99.4480% ( 1) 00:15:11.380 5.440 - 5.467: 99.4533% ( 1) 00:15:11.380 5.493 - 5.520: 99.4586% ( 1) 00:15:11.380 5.547 - 5.573: 99.4639% ( 1) 00:15:11.380 5.653 - 5.680: 99.4692% ( 1) 00:15:11.380 5.787 - 5.813: 99.4745% ( 1) 00:15:11.380 5.813 - 5.840: 99.4851% ( 2) 00:15:11.380 5.947 - 5.973: 99.4904% ( 1) 00:15:11.380 5.973 - 6.000: 99.4958% ( 1) 00:15:11.380 6.027 - 6.053: 99.5011% ( 1) 00:15:11.380 6.080 - 6.107: 99.5064% ( 1) 00:15:11.380 6.160 - 6.187: 99.5117% ( 1) 00:15:11.380 6.320 - 6.347: 99.5170% ( 1) 00:15:11.380 6.533 - 6.560: 99.5276% ( 2) 00:15:11.380 7.200 - 7.253: 99.5329% ( 1) 00:15:11.380 9.067 - 9.120: 99.5382% ( 1) 00:15:11.380 9.707 - 9.760: 99.5435% ( 1) 00:15:11.380 13.173 - 13.227: 99.5488% ( 1) 00:15:11.380 45.013 - 45.227: 99.5541% ( 1) 00:15:11.380 996.693 - 1003.520: 99.5594% ( 1) 00:15:11.380 2075.307 - 2088.960: 99.5648% ( 1) 00:15:11.380 3986.773 - 4014.080: 99.9894% ( 80) 00:15:11.380 5980.160 - 6007.467: 99.9947% ( 1) 00:15:11.380 6963.200 - 6990.507: 100.0000% ( 1) 00:15:11.380 00:15:11.380 04:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:11.380 04:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:11.380 04:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:11.380 04:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:11.380 04:26:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:11.641 [ 00:15:11.641 { 00:15:11.641 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:11.641 "subtype": "Discovery", 00:15:11.641 "listen_addresses": [], 00:15:11.641 "allow_any_host": true, 00:15:11.641 "hosts": [] 00:15:11.641 }, 00:15:11.641 { 00:15:11.642 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:11.642 "subtype": "NVMe",
"listen_addresses": [ 00:15:11.642 { 00:15:11.642 "trtype": "VFIOUSER", 00:15:11.642 "adrfam": "IPv4", 00:15:11.642 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:11.642 "trsvcid": "0" 00:15:11.642 } 00:15:11.642 ], 00:15:11.642 "allow_any_host": true, 00:15:11.642 "hosts": [], 00:15:11.642 "serial_number": "SPDK1", 00:15:11.642 "model_number": "SPDK bdev Controller", 00:15:11.642 "max_namespaces": 32, 00:15:11.642 "min_cntlid": 1, 00:15:11.642 "max_cntlid": 65519, 00:15:11.642 "namespaces": [ 00:15:11.642 { 00:15:11.642 "nsid": 1, 00:15:11.642 "bdev_name": "Malloc1", 00:15:11.642 "name": "Malloc1", 00:15:11.642 "nguid": "C459E5C42DB64110BA8A6DDD703050AA", 00:15:11.642 "uuid": "c459e5c4-2db6-4110-ba8a-6ddd703050aa" 00:15:11.642 } 00:15:11.642 ] 00:15:11.642 }, 00:15:11.642 { 00:15:11.642 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:11.642 "subtype": "NVMe", 00:15:11.642 "listen_addresses": [ 00:15:11.642 { 00:15:11.642 "trtype": "VFIOUSER", 00:15:11.642 "adrfam": "IPv4", 00:15:11.642 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:11.642 "trsvcid": "0" 00:15:11.642 } 00:15:11.642 ], 00:15:11.642 "allow_any_host": true, 00:15:11.642 "hosts": [], 00:15:11.642 "serial_number": "SPDK2", 00:15:11.642 "model_number": "SPDK bdev Controller", 00:15:11.642 "max_namespaces": 32, 00:15:11.642 "min_cntlid": 1, 00:15:11.642 "max_cntlid": 65519, 00:15:11.642 "namespaces": [ 00:15:11.642 { 00:15:11.642 "nsid": 1, 00:15:11.642 "bdev_name": "Malloc2", 00:15:11.642 "name": "Malloc2", 00:15:11.642 "nguid": "255BF30DAB3E462FBD5B968AD20521A0", 00:15:11.642 "uuid": "255bf30d-ab3e-462f-bd5b-968ad20521a0" 00:15:11.642 } 00:15:11.642 ] 00:15:11.642 } 00:15:11.642 ] 00:15:11.642 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:11.642 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2946931 00:15:11.642 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:11.642 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:11.642 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:11.642 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:11.642 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:11.642 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:11.642 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:11.642 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:11.904 Malloc3 00:15:11.904 [2024-11-05 04:26:25.313110] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.904 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:11.904 [2024-11-05 04:26:25.492319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.904 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:11.904 Asynchronous Event Request test 00:15:11.904 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.904 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.904 Registering asynchronous event callbacks... 00:15:11.904 Starting namespace attribute notice tests for all controllers... 00:15:11.904 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:11.904 aer_cb - Changed Namespace 00:15:11.904 Cleaning up... 00:15:12.165 [ 00:15:12.165 { 00:15:12.165 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:12.165 "subtype": "Discovery", 00:15:12.165 "listen_addresses": [], 00:15:12.165 "allow_any_host": true, 00:15:12.165 "hosts": [] 00:15:12.165 }, 00:15:12.165 { 00:15:12.165 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:12.165 "subtype": "NVMe", 00:15:12.165 "listen_addresses": [ 00:15:12.165 { 00:15:12.165 "trtype": "VFIOUSER", 00:15:12.165 "adrfam": "IPv4", 00:15:12.165 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:12.165 "trsvcid": "0" 00:15:12.165 } 00:15:12.165 ], 00:15:12.165 "allow_any_host": true, 00:15:12.165 "hosts": [], 00:15:12.165 "serial_number": "SPDK1", 00:15:12.165 "model_number": "SPDK bdev Controller", 00:15:12.165 "max_namespaces": 32, 00:15:12.165 "min_cntlid": 1, 00:15:12.165 "max_cntlid": 65519, 00:15:12.165 "namespaces": [ 00:15:12.165 { 00:15:12.165 "nsid": 1, 00:15:12.165 "bdev_name": "Malloc1", 00:15:12.165 "name": "Malloc1", 00:15:12.165 "nguid": "C459E5C42DB64110BA8A6DDD703050AA", 00:15:12.165 "uuid": "c459e5c4-2db6-4110-ba8a-6ddd703050aa" 00:15:12.165 }, 00:15:12.165 { 00:15:12.165 "nsid": 2, 00:15:12.165 "bdev_name": "Malloc3", 00:15:12.165 "name": "Malloc3", 00:15:12.165 "nguid": "FD62311FA8EE4595B67DE41D92395BB1", 00:15:12.165 "uuid": "fd62311f-a8ee-4595-b67d-e41d92395bb1" 00:15:12.165 } 00:15:12.165 ] 00:15:12.165 }, 00:15:12.165 { 00:15:12.165 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:12.165 "subtype": "NVMe", 00:15:12.165 "listen_addresses": [ 00:15:12.165 { 00:15:12.165 "trtype": "VFIOUSER", 00:15:12.165 "adrfam": "IPv4", 00:15:12.165 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:12.165 "trsvcid": "0" 00:15:12.165 } 00:15:12.165 ], 00:15:12.165 "allow_any_host": true, 00:15:12.165 "hosts": [], 00:15:12.165 "serial_number": "SPDK2", 00:15:12.165 "model_number": "SPDK bdev 
Controller", 00:15:12.165 "max_namespaces": 32, 00:15:12.165 "min_cntlid": 1, 00:15:12.165 "max_cntlid": 65519, 00:15:12.165 "namespaces": [ 00:15:12.165 { 00:15:12.165 "nsid": 1, 00:15:12.165 "bdev_name": "Malloc2", 00:15:12.165 "name": "Malloc2", 00:15:12.165 "nguid": "255BF30DAB3E462FBD5B968AD20521A0", 00:15:12.165 "uuid": "255bf30d-ab3e-462f-bd5b-968ad20521a0" 00:15:12.165 } 00:15:12.165 ] 00:15:12.165 } 00:15:12.165 ] 00:15:12.165 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2946931 00:15:12.165 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:12.165 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:12.165 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:12.166 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:12.166 [2024-11-05 04:26:25.727386] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:15:12.166 [2024-11-05 04:26:25.727429] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2946964 ] 00:15:12.166 [2024-11-05 04:26:25.781789] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:12.166 [2024-11-05 04:26:25.789983] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:12.166 [2024-11-05 04:26:25.790003] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f43dd283000 00:15:12.166 [2024-11-05 04:26:25.790981] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.166 [2024-11-05 04:26:25.791985] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.166 [2024-11-05 04:26:25.792991] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.166 [2024-11-05 04:26:25.793994] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:12.166 [2024-11-05 04:26:25.795003] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:12.166 [2024-11-05 04:26:25.796009] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:12.166 [2024-11-05 04:26:25.797014] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:12.166 [2024-11-05 04:26:25.798024] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:15:12.166 [2024-11-05 04:26:25.799033] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:12.166 [2024-11-05 04:26:25.799047] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f43dd278000 00:15:12.166 [2024-11-05 04:26:25.800374] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:12.430 [2024-11-05 04:26:25.821910] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:12.430 [2024-11-05 04:26:25.821936] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:15:12.430 [2024-11-05 04:26:25.823993] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:12.430 [2024-11-05 04:26:25.824041] nvme_pcie_common.c: 134:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:12.430 [2024-11-05 04:26:25.824127] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:15:12.430 [2024-11-05 04:26:25.824141] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:15:12.430 [2024-11-05 04:26:25.824147] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:15:12.430 [2024-11-05 04:26:25.824997] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:12.430 [2024-11-05 04:26:25.825008] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:15:12.430 [2024-11-05 04:26:25.825015] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:15:12.430 [2024-11-05 04:26:25.826003] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:12.430 [2024-11-05 04:26:25.826013] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:15:12.430 [2024-11-05 04:26:25.826021] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:15:12.430 [2024-11-05 04:26:25.827012] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:12.430 [2024-11-05 04:26:25.827023] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:12.430 [2024-11-05 04:26:25.828019] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:12.430 [2024-11-05 04:26:25.828029] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
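[Editor's note] The DEBUG records above and below trace the standard NVMe controller-enable handshake, here carried over vfio-user: read CAP (register offset 0x0) and VS (0x8, value 0x10300 = NVMe 1.3), clear CC.EN (0x14) and wait for CSTS.RDY=0 (0x1c), program the admin queue registers ASQ/ACQ/AQA (0x28/0x30/0x24), set CC.EN=1, then poll until CSTS.RDY=1. A throwaway shell filter to pull just that register traffic out of a saved run — the log file name is illustrative, not part of the harness:
    grep -E 'offset 0x(0|8|14|1c|24|28|30),' identify-vfio-user2.log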
00:15:12.430 [2024-11-05 04:26:25.828034] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:15:12.430 [2024-11-05 04:26:25.828041] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:12.430 [2024-11-05 04:26:25.828150] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:15:12.430 [2024-11-05 04:26:25.828155] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:12.430 [2024-11-05 04:26:25.828160] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:12.430 [2024-11-05 04:26:25.829023] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:12.430 [2024-11-05 04:26:25.830027] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:12.430 [2024-11-05 04:26:25.831031] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:12.430 [2024-11-05 04:26:25.832037] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:12.430 [2024-11-05 04:26:25.832080] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:12.430 [2024-11-05 04:26:25.833048] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:12.430 [2024-11-05 04:26:25.833057] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:12.430 [2024-11-05 04:26:25.833063] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:15:12.430 [2024-11-05 04:26:25.833084] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:15:12.430 [2024-11-05 04:26:25.833092] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:15:12.430 [2024-11-05 04:26:25.833105] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:12.430 [2024-11-05 04:26:25.833110] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.430 [2024-11-05 04:26:25.833114] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.430 [2024-11-05 04:26:25.833127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.430 [2024-11-05 04:26:25.839757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:12.430 
[2024-11-05 04:26:25.839770] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:15:12.430 [2024-11-05 04:26:25.839776] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:15:12.430 [2024-11-05 04:26:25.839781] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:15:12.430 [2024-11-05 04:26:25.839786] nvme_ctrlr.c:2072:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:12.430 [2024-11-05 04:26:25.839791] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:15:12.430 [2024-11-05 04:26:25.839795] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:15:12.430 [2024-11-05 04:26:25.839800] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:15:12.430 [2024-11-05 04:26:25.839811] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:15:12.430 [2024-11-05 04:26:25.839821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:12.430 [2024-11-05 04:26:25.847752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:12.430 [2024-11-05 04:26:25.847768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.430 [2024-11-05 04:26:25.847777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.430 [2024-11-05 04:26:25.847785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.430 [2024-11-05 04:26:25.847794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:12.430 [2024-11-05 04:26:25.847798] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:15:12.430 [2024-11-05 04:26:25.847805] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:12.430 [2024-11-05 04:26:25.847815] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:12.430 [2024-11-05 04:26:25.855755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:12.430 [2024-11-05 04:26:25.855765] nvme_ctrlr.c:3011:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:15:12.430 [2024-11-05 04:26:25.855771] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:15:12.430 [2024-11-05 04:26:25.855778] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:15:12.430 [2024-11-05 04:26:25.855784] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:15:12.430 [2024-11-05 04:26:25.855793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:12.430 [2024-11-05 04:26:25.863754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:12.430 [2024-11-05 04:26:25.863819] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:15:12.430 [2024-11-05 04:26:25.863828] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:15:12.430 [2024-11-05 04:26:25.863836] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:12.430 [2024-11-05 04:26:25.863840] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:12.430 [2024-11-05 04:26:25.863844] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.430 [2024-11-05 04:26:25.863850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:12.430 [2024-11-05 04:26:25.871753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:12.431 [2024-11-05 04:26:25.871764] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:15:12.431 [2024-11-05 04:26:25.871782] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:15:12.431 [2024-11-05 04:26:25.871790] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:15:12.431 [2024-11-05 04:26:25.871797] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:12.431 [2024-11-05 04:26:25.871802] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.431 [2024-11-05 04:26:25.871805] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.431 [2024-11-05 04:26:25.871811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.431 [2024-11-05 04:26:25.879752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:12.431 [2024-11-05 04:26:25.879767] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:12.431 [2024-11-05 04:26:25.879775] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:15:12.431 [2024-11-05 04:26:25.879783] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:12.431 [2024-11-05 04:26:25.879787] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.431 [2024-11-05 04:26:25.879791] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.431 [2024-11-05 04:26:25.879797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.431 [2024-11-05 04:26:25.887752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:12.431 [2024-11-05 04:26:25.887762] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:12.431 [2024-11-05 04:26:25.887769] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:15:12.431 [2024-11-05 04:26:25.887778] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:15:12.431 [2024-11-05 04:26:25.887783] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:15:12.431 [2024-11-05 04:26:25.887788] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:12.431 [2024-11-05 04:26:25.887793] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:15:12.431 [2024-11-05 04:26:25.887799] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:15:12.431 [2024-11-05 04:26:25.887803] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:15:12.431 [2024-11-05 04:26:25.887808] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:15:12.431 [2024-11-05 04:26:25.887825] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:12.431 [2024-11-05 04:26:25.895751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:12.431 [2024-11-05 04:26:25.895766] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:12.431 [2024-11-05 04:26:25.903753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:12.431 [2024-11-05 04:26:25.903767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:12.431 [2024-11-05 04:26:25.911751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:15:12.431 [2024-11-05 04:26:25.911765] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:12.431 [2024-11-05 04:26:25.919751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:12.431 [2024-11-05 04:26:25.919768] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:12.431 [2024-11-05 04:26:25.919772] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:12.431 [2024-11-05 04:26:25.919776] nvme_pcie_common.c:1241:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:12.431 [2024-11-05 04:26:25.919780] nvme_pcie_common.c:1257:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:12.431 [2024-11-05 04:26:25.919783] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:12.431 [2024-11-05 04:26:25.919790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:12.431 [2024-11-05 04:26:25.919798] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:12.431 [2024-11-05 04:26:25.919802] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:12.431 [2024-11-05 04:26:25.919806] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.431 [2024-11-05 04:26:25.919812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:12.431 [2024-11-05 04:26:25.919820] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:12.431 [2024-11-05 04:26:25.919824] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:12.431 [2024-11-05 04:26:25.919827] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.431 [2024-11-05 04:26:25.919834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:12.431 [2024-11-05 04:26:25.919843] nvme_pcie_common.c:1204:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:12.431 [2024-11-05 04:26:25.919847] nvme_pcie_common.c:1232:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:12.431 [2024-11-05 04:26:25.919851] nvme_pcie_common.c:1292:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:12.431 [2024-11-05 04:26:25.919857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:12.431 [2024-11-05 04:26:25.927754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:12.431 [2024-11-05 04:26:25.927769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:12.431 [2024-11-05 04:26:25.927780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:12.431 
[2024-11-05 04:26:25.927787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:12.431 ===================================================== 00:15:12.431 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:12.431 ===================================================== 00:15:12.431 Controller Capabilities/Features 00:15:12.431 ================================ 00:15:12.431 Vendor ID: 4e58 00:15:12.431 Subsystem Vendor ID: 4e58 00:15:12.431 Serial Number: SPDK2 00:15:12.431 Model Number: SPDK bdev Controller 00:15:12.431 Firmware Version: 25.01 00:15:12.431 Recommended Arb Burst: 6 00:15:12.431 IEEE OUI Identifier: 8d 6b 50 00:15:12.431 Multi-path I/O 00:15:12.431 May have multiple subsystem ports: Yes 00:15:12.431 May have multiple controllers: Yes 00:15:12.431 Associated with SR-IOV VF: No 00:15:12.431 Max Data Transfer Size: 131072 00:15:12.431 Max Number of Namespaces: 32 00:15:12.431 Max Number of I/O Queues: 127 00:15:12.431 NVMe Specification Version (VS): 1.3 00:15:12.431 NVMe Specification Version (Identify): 1.3 00:15:12.431 Maximum Queue Entries: 256 00:15:12.431 Contiguous Queues Required: Yes 00:15:12.431 Arbitration Mechanisms Supported 00:15:12.431 Weighted Round Robin: Not Supported 00:15:12.431 Vendor Specific: Not Supported 00:15:12.431 Reset Timeout: 15000 ms 00:15:12.431 Doorbell Stride: 4 bytes 00:15:12.431 NVM Subsystem Reset: Not Supported 00:15:12.431 Command Sets Supported 00:15:12.431 NVM Command Set: Supported 00:15:12.431 Boot Partition: Not Supported 00:15:12.431 Memory Page Size Minimum: 4096 bytes 00:15:12.431 Memory Page Size Maximum: 4096 bytes 00:15:12.431 Persistent Memory Region: Not Supported 00:15:12.431 Optional Asynchronous Events Supported 00:15:12.431 Namespace Attribute Notices: Supported 00:15:12.431 Firmware Activation Notices: Not Supported 00:15:12.431 ANA Change Notices: Not Supported 00:15:12.431 PLE Aggregate Log Change Notices: Not Supported 00:15:12.431 LBA Status Info Alert Notices: Not Supported 00:15:12.431 EGE Aggregate Log Change Notices: Not Supported 00:15:12.431 Normal NVM Subsystem Shutdown event: Not Supported 00:15:12.431 Zone Descriptor Change Notices: Not Supported 00:15:12.431 Discovery Log Change Notices: Not Supported 00:15:12.431 Controller Attributes 00:15:12.431 128-bit Host Identifier: Supported 00:15:12.431 Non-Operational Permissive Mode: Not Supported 00:15:12.431 NVM Sets: Not Supported 00:15:12.431 Read Recovery Levels: Not Supported 00:15:12.431 Endurance Groups: Not Supported 00:15:12.431 Predictable Latency Mode: Not Supported 00:15:12.431 Traffic Based Keep ALive: Not Supported 00:15:12.431 Namespace Granularity: Not Supported 00:15:12.431 SQ Associations: Not Supported 00:15:12.431 UUID List: Not Supported 00:15:12.431 Multi-Domain Subsystem: Not Supported 00:15:12.431 Fixed Capacity Management: Not Supported 00:15:12.431 Variable Capacity Management: Not Supported 00:15:12.431 Delete Endurance Group: Not Supported 00:15:12.431 Delete NVM Set: Not Supported 00:15:12.431 Extended LBA Formats Supported: Not Supported 00:15:12.431 Flexible Data Placement Supported: Not Supported 00:15:12.431 00:15:12.431 Controller Memory Buffer Support 00:15:12.431 ================================ 00:15:12.431 Supported: No 00:15:12.431 00:15:12.431 Persistent Memory Region Support 00:15:12.432 ================================ 00:15:12.432 Supported: No 00:15:12.432 00:15:12.432 Admin Command Set Attributes 
00:15:12.432 ============================ 00:15:12.432 Security Send/Receive: Not Supported 00:15:12.432 Format NVM: Not Supported 00:15:12.432 Firmware Activate/Download: Not Supported 00:15:12.432 Namespace Management: Not Supported 00:15:12.432 Device Self-Test: Not Supported 00:15:12.432 Directives: Not Supported 00:15:12.432 NVMe-MI: Not Supported 00:15:12.432 Virtualization Management: Not Supported 00:15:12.432 Doorbell Buffer Config: Not Supported 00:15:12.432 Get LBA Status Capability: Not Supported 00:15:12.432 Command & Feature Lockdown Capability: Not Supported 00:15:12.432 Abort Command Limit: 4 00:15:12.432 Async Event Request Limit: 4 00:15:12.432 Number of Firmware Slots: N/A 00:15:12.432 Firmware Slot 1 Read-Only: N/A 00:15:12.432 Firmware Activation Without Reset: N/A 00:15:12.432 Multiple Update Detection Support: N/A 00:15:12.432 Firmware Update Granularity: No Information Provided 00:15:12.432 Per-Namespace SMART Log: No 00:15:12.432 Asymmetric Namespace Access Log Page: Not Supported 00:15:12.432 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:12.432 Command Effects Log Page: Supported 00:15:12.432 Get Log Page Extended Data: Supported 00:15:12.432 Telemetry Log Pages: Not Supported 00:15:12.432 Persistent Event Log Pages: Not Supported 00:15:12.432 Supported Log Pages Log Page: May Support 00:15:12.432 Commands Supported & Effects Log Page: Not Supported 00:15:12.432 Feature Identifiers & Effects Log Page:May Support 00:15:12.432 NVMe-MI Commands & Effects Log Page: May Support 00:15:12.432 Data Area 4 for Telemetry Log: Not Supported 00:15:12.432 Error Log Page Entries Supported: 128 00:15:12.432 Keep Alive: Supported 00:15:12.432 Keep Alive Granularity: 10000 ms 00:15:12.432 00:15:12.432 NVM Command Set Attributes 00:15:12.432 ========================== 00:15:12.432 Submission Queue Entry Size 00:15:12.432 Max: 64 00:15:12.432 Min: 64 00:15:12.432 Completion Queue Entry Size 00:15:12.432 Max: 16 00:15:12.432 Min: 16 00:15:12.432 Number of Namespaces: 32 00:15:12.432 Compare Command: Supported 00:15:12.432 Write Uncorrectable Command: Not Supported 00:15:12.432 Dataset Management Command: Supported 00:15:12.432 Write Zeroes Command: Supported 00:15:12.432 Set Features Save Field: Not Supported 00:15:12.432 Reservations: Not Supported 00:15:12.432 Timestamp: Not Supported 00:15:12.432 Copy: Supported 00:15:12.432 Volatile Write Cache: Present 00:15:12.432 Atomic Write Unit (Normal): 1 00:15:12.432 Atomic Write Unit (PFail): 1 00:15:12.432 Atomic Compare & Write Unit: 1 00:15:12.432 Fused Compare & Write: Supported 00:15:12.432 Scatter-Gather List 00:15:12.432 SGL Command Set: Supported (Dword aligned) 00:15:12.432 SGL Keyed: Not Supported 00:15:12.432 SGL Bit Bucket Descriptor: Not Supported 00:15:12.432 SGL Metadata Pointer: Not Supported 00:15:12.432 Oversized SGL: Not Supported 00:15:12.432 SGL Metadata Address: Not Supported 00:15:12.432 SGL Offset: Not Supported 00:15:12.432 Transport SGL Data Block: Not Supported 00:15:12.432 Replay Protected Memory Block: Not Supported 00:15:12.432 00:15:12.432 Firmware Slot Information 00:15:12.432 ========================= 00:15:12.432 Active slot: 1 00:15:12.432 Slot 1 Firmware Revision: 25.01 00:15:12.432 00:15:12.432 00:15:12.432 Commands Supported and Effects 00:15:12.432 ============================== 00:15:12.432 Admin Commands 00:15:12.432 -------------- 00:15:12.432 Get Log Page (02h): Supported 00:15:12.432 Identify (06h): Supported 00:15:12.432 Abort (08h): Supported 00:15:12.432 Set Features (09h): Supported 
00:15:12.432 Get Features (0Ah): Supported 00:15:12.432 Asynchronous Event Request (0Ch): Supported 00:15:12.432 Keep Alive (18h): Supported 00:15:12.432 I/O Commands 00:15:12.432 ------------ 00:15:12.432 Flush (00h): Supported LBA-Change 00:15:12.432 Write (01h): Supported LBA-Change 00:15:12.432 Read (02h): Supported 00:15:12.432 Compare (05h): Supported 00:15:12.432 Write Zeroes (08h): Supported LBA-Change 00:15:12.432 Dataset Management (09h): Supported LBA-Change 00:15:12.432 Copy (19h): Supported LBA-Change 00:15:12.432 00:15:12.432 Error Log 00:15:12.432 ========= 00:15:12.432 00:15:12.432 Arbitration 00:15:12.432 =========== 00:15:12.432 Arbitration Burst: 1 00:15:12.432 00:15:12.432 Power Management 00:15:12.432 ================ 00:15:12.432 Number of Power States: 1 00:15:12.432 Current Power State: Power State #0 00:15:12.432 Power State #0: 00:15:12.432 Max Power: 0.00 W 00:15:12.432 Non-Operational State: Operational 00:15:12.432 Entry Latency: Not Reported 00:15:12.432 Exit Latency: Not Reported 00:15:12.432 Relative Read Throughput: 0 00:15:12.432 Relative Read Latency: 0 00:15:12.432 Relative Write Throughput: 0 00:15:12.432 Relative Write Latency: 0 00:15:12.432 Idle Power: Not Reported 00:15:12.432 Active Power: Not Reported 00:15:12.432 Non-Operational Permissive Mode: Not Supported 00:15:12.432 00:15:12.432 Health Information 00:15:12.432 ================== 00:15:12.432 Critical Warnings: 00:15:12.432 Available Spare Space: OK 00:15:12.432 Temperature: OK 00:15:12.432 Device Reliability: OK 00:15:12.432 Read Only: No 00:15:12.432 Volatile Memory Backup: OK 00:15:12.432 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:12.432 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:12.432 Available Spare: 0% 00:15:12.432 Available Sp[2024-11-05 04:26:25.927889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:12.432 [2024-11-05 04:26:25.935752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:12.432 [2024-11-05 04:26:25.935786] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:15:12.432 [2024-11-05 04:26:25.935797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.432 [2024-11-05 04:26:25.935803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.432 [2024-11-05 04:26:25.935810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.432 [2024-11-05 04:26:25.935816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:12.432 [2024-11-05 04:26:25.935864] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:12.432 [2024-11-05 04:26:25.935875] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:12.432 [2024-11-05 04:26:25.936872] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:12.432 [2024-11-05 04:26:25.936922] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:15:12.432 [2024-11-05 04:26:25.936929] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:15:12.432 [2024-11-05 04:26:25.937875] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:12.432 [2024-11-05 04:26:25.937887] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:15:12.432 [2024-11-05 04:26:25.937935] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:12.432 [2024-11-05 04:26:25.940754] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:12.432 are Threshold: 0% 00:15:12.432 Life Percentage Used: 0% 00:15:12.432 Data Units Read: 0 00:15:12.432 Data Units Written: 0 00:15:12.432 Host Read Commands: 0 00:15:12.432 Host Write Commands: 0 00:15:12.432 Controller Busy Time: 0 minutes 00:15:12.432 Power Cycles: 0 00:15:12.432 Power On Hours: 0 hours 00:15:12.432 Unsafe Shutdowns: 0 00:15:12.432 Unrecoverable Media Errors: 0 00:15:12.432 Lifetime Error Log Entries: 0 00:15:12.432 Warning Temperature Time: 0 minutes 00:15:12.432 Critical Temperature Time: 0 minutes 00:15:12.432 00:15:12.432 Number of Queues 00:15:12.432 ================ 00:15:12.432 Number of I/O Submission Queues: 127 00:15:12.432 Number of I/O Completion Queues: 127 00:15:12.432 00:15:12.432 Active Namespaces 00:15:12.432 ================= 00:15:12.432 Namespace ID:1 00:15:12.432 Error Recovery Timeout: Unlimited 00:15:12.432 Command Set Identifier: NVM (00h) 00:15:12.432 Deallocate: Supported 00:15:12.432 Deallocated/Unwritten Error: Not Supported 00:15:12.432 Deallocated Read Value: Unknown 00:15:12.432 Deallocate in Write Zeroes: Not Supported 00:15:12.432 Deallocated Guard Field: 0xFFFF 00:15:12.432 Flush: Supported 00:15:12.432 Reservation: Supported 00:15:12.432 Namespace Sharing Capabilities: Multiple Controllers 00:15:12.432 Size (in LBAs): 131072 (0GiB) 00:15:12.432 Capacity (in LBAs): 131072 (0GiB) 00:15:12.432 Utilization (in LBAs): 131072 (0GiB) 00:15:12.432 NGUID: 255BF30DAB3E462FBD5B968AD20521A0 00:15:12.433 UUID: 255bf30d-ab3e-462f-bd5b-968ad20521a0 00:15:12.433 Thin Provisioning: Not Supported 00:15:12.433 Per-NS Atomic Units: Yes 00:15:12.433 Atomic Boundary Size (Normal): 0 00:15:12.433 Atomic Boundary Size (PFail): 0 00:15:12.433 Atomic Boundary Offset: 0 00:15:12.433 Maximum Single Source Range Length: 65535 00:15:12.433 Maximum Copy Length: 65535 00:15:12.433 Maximum Source Range Count: 1 00:15:12.433 NGUID/EUI64 Never Reused: No 00:15:12.433 Namespace Write Protected: No 00:15:12.433 Number of LBA Formats: 1 00:15:12.433 Current LBA Format: LBA Format #00 00:15:12.433 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:12.433 00:15:12.433 04:26:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:12.695 [2024-11-05 04:26:26.146131] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.985 Initializing NVMe Controllers 00:15:17.985 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:17.985 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:17.985 Initialization complete. Launching workers. 00:15:17.985 ======================================================== 00:15:17.985 Latency(us) 00:15:17.985 Device Information : IOPS MiB/s Average min max 00:15:17.985 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40039.60 156.40 3199.22 846.31 10776.68 00:15:17.985 ======================================================== 00:15:17.985 Total : 40039.60 156.40 3199.22 846.31 10776.68 00:15:17.985 00:15:17.985 [2024-11-05 04:26:31.255941] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.985 04:26:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:17.985 [2024-11-05 04:26:31.451525] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:23.275 Initializing NVMe Controllers 00:15:23.275 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:23.275 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:23.275 Initialization complete. Launching workers. 00:15:23.275 ======================================================== 00:15:23.275 Latency(us) 00:15:23.275 Device Information : IOPS MiB/s Average min max 00:15:23.275 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34690.75 135.51 3689.18 1112.89 9674.11 00:15:23.275 ======================================================== 00:15:23.275 Total : 34690.75 135.51 3689.18 1112.89 9674.11 00:15:23.275 00:15:23.275 [2024-11-05 04:26:36.471182] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:23.275 04:26:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:23.275 [2024-11-05 04:26:36.680355] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:28.564 [2024-11-05 04:26:41.816837] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:28.564 Initializing NVMe Controllers 00:15:28.564 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:28.564 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:28.564 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:28.564 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:28.564 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:28.564 Initialization complete. Launching workers. 
00:15:28.564 Starting thread on core 2 00:15:28.564 Starting thread on core 3 00:15:28.564 Starting thread on core 1 00:15:28.564 04:26:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:28.564 [2024-11-05 04:26:42.097067] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.868 [2024-11-05 04:26:45.154018] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.868 Initializing NVMe Controllers 00:15:31.868 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.868 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.868 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:31.868 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:31.868 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:31.868 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:31.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:31.868 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:31.868 Initialization complete. Launching workers. 00:15:31.868 Starting thread on core 1 with urgent priority queue 00:15:31.868 Starting thread on core 2 with urgent priority queue 00:15:31.868 Starting thread on core 3 with urgent priority queue 00:15:31.868 Starting thread on core 0 with urgent priority queue 00:15:31.868 SPDK bdev Controller (SPDK2 ) core 0: 13671.33 IO/s 7.31 secs/100000 ios 00:15:31.868 SPDK bdev Controller (SPDK2 ) core 1: 8145.33 IO/s 12.28 secs/100000 ios 00:15:31.868 SPDK bdev Controller (SPDK2 ) core 2: 11735.67 IO/s 8.52 secs/100000 ios 00:15:31.868 SPDK bdev Controller (SPDK2 ) core 3: 9884.00 IO/s 10.12 secs/100000 ios 00:15:31.868 ======================================================== 00:15:31.868 00:15:31.868 04:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:31.868 [2024-11-05 04:26:45.438960] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:31.868 Initializing NVMe Controllers 00:15:31.868 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.868 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:31.868 Namespace ID: 1 size: 0GB 00:15:31.868 Initialization complete. 00:15:31.868 INFO: using host memory buffer for IO 00:15:31.868 Hello world! 
00:15:31.868 [2024-11-05 04:26:45.452039] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:31.868 04:26:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:32.130 [2024-11-05 04:26:45.735025] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.516 Initializing NVMe Controllers 00:15:33.516 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.516 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.516 Initialization complete. Launching workers. 00:15:33.516 submit (in ns) avg, min, max = 8921.5, 3901.7, 4006785.0 00:15:33.516 complete (in ns) avg, min, max = 16642.3, 2384.2, 4004571.7 00:15:33.516 00:15:33.516 Submit histogram 00:15:33.516 ================ 00:15:33.516 Range in us Cumulative Count 00:15:33.516 3.893 - 3.920: 0.7724% ( 146) 00:15:33.516 3.920 - 3.947: 5.9997% ( 988) 00:15:33.516 3.947 - 3.973: 15.3854% ( 1774) 00:15:33.516 3.973 - 4.000: 26.2314% ( 2050) 00:15:33.516 4.000 - 4.027: 36.1991% ( 1884) 00:15:33.516 4.027 - 4.053: 46.8176% ( 2007) 00:15:33.516 4.053 - 4.080: 62.4676% ( 2958) 00:15:33.517 4.080 - 4.107: 79.0963% ( 3143) 00:15:33.517 4.107 - 4.133: 90.6143% ( 2177) 00:15:33.517 4.133 - 4.160: 96.6721% ( 1145) 00:15:33.517 4.160 - 4.187: 98.8678% ( 415) 00:15:33.517 4.187 - 4.213: 99.3175% ( 85) 00:15:33.517 4.213 - 4.240: 99.4445% ( 24) 00:15:33.517 4.240 - 4.267: 99.4603% ( 3) 00:15:33.517 4.267 - 4.293: 99.4656% ( 1) 00:15:33.517 4.293 - 4.320: 99.4709% ( 1) 00:15:33.517 4.347 - 4.373: 99.4762% ( 1) 00:15:33.517 4.400 - 4.427: 99.4815% ( 1) 00:15:33.517 4.587 - 4.613: 99.4868% ( 1) 00:15:33.517 4.613 - 4.640: 99.4921% ( 1) 00:15:33.517 4.720 - 4.747: 99.4974% ( 1) 00:15:33.517 4.827 - 4.853: 99.5027% ( 1) 00:15:33.517 4.987 - 5.013: 99.5080% ( 1) 00:15:33.517 5.040 - 5.067: 99.5133% ( 1) 00:15:33.517 5.253 - 5.280: 99.5185% ( 1) 00:15:33.517 5.333 - 5.360: 99.5238% ( 1) 00:15:33.517 5.413 - 5.440: 99.5291% ( 1) 00:15:33.517 5.520 - 5.547: 99.5344% ( 1) 00:15:33.517 5.547 - 5.573: 99.5397% ( 1) 00:15:33.517 5.600 - 5.627: 99.5450% ( 1) 00:15:33.517 5.627 - 5.653: 99.5503% ( 1) 00:15:33.517 5.733 - 5.760: 99.5556% ( 1) 00:15:33.517 5.787 - 5.813: 99.5715% ( 3) 00:15:33.517 5.813 - 5.840: 99.5767% ( 1) 00:15:33.517 5.840 - 5.867: 99.5820% ( 1) 00:15:33.517 5.867 - 5.893: 99.5926% ( 2) 00:15:33.517 5.920 - 5.947: 99.5979% ( 1) 00:15:33.517 5.947 - 5.973: 99.6085% ( 2) 00:15:33.517 5.973 - 6.000: 99.6191% ( 2) 00:15:33.517 6.000 - 6.027: 99.6244% ( 1) 00:15:33.517 6.027 - 6.053: 99.6349% ( 2) 00:15:33.517 6.133 - 6.160: 99.6455% ( 2) 00:15:33.517 6.213 - 6.240: 99.6561% ( 2) 00:15:33.517 6.320 - 6.347: 99.6667% ( 2) 00:15:33.517 6.347 - 6.373: 99.6773% ( 2) 00:15:33.517 6.373 - 6.400: 99.6826% ( 1) 00:15:33.517 6.427 - 6.453: 99.6878% ( 1) 00:15:33.517 6.453 - 6.480: 99.6984% ( 2) 00:15:33.517 6.533 - 6.560: 99.7090% ( 2) 00:15:33.517 6.613 - 6.640: 99.7143% ( 1) 00:15:33.517 6.667 - 6.693: 99.7196% ( 1) 00:15:33.517 6.720 - 6.747: 99.7249% ( 1) 00:15:33.517 6.827 - 6.880: 99.7302% ( 1) 00:15:33.517 6.880 - 6.933: 99.7355% ( 1) 00:15:33.517 6.933 - 6.987: 99.7408% ( 1) 00:15:33.517 6.987 - 7.040: 99.7513% ( 2) 00:15:33.517 7.040 - 7.093: 99.7566% ( 1) 00:15:33.517 7.093 - 7.147: 99.7619% ( 1) 
00:15:33.517 7.147 - 7.200: 99.7725% ( 2) 00:15:33.517 7.253 - 7.307: 99.7884% ( 3) 00:15:33.517 7.307 - 7.360: 99.7990% ( 2) 00:15:33.517 7.360 - 7.413: 99.8095% ( 2) 00:15:33.517 7.413 - 7.467: 99.8148% ( 1) 00:15:33.517 7.467 - 7.520: 99.8201% ( 1) 00:15:33.517 7.573 - 7.627: 99.8307% ( 2) 00:15:33.517 7.627 - 7.680: 99.8360% ( 1) 00:15:33.517 7.680 - 7.733: 99.8413% ( 1) 00:15:33.517 7.787 - 7.840: 99.8466% ( 1) 00:15:33.517 7.840 - 7.893: 99.8519% ( 1) 00:15:33.517 7.947 - 8.000: 99.8572% ( 1) 00:15:33.517 8.107 - 8.160: 99.8624% ( 1) 00:15:33.517 8.373 - 8.427: 99.8677% ( 1) 00:15:33.517 8.693 - 8.747: 99.8730% ( 1) 00:15:33.517 10.880 - 10.933: 99.8783% ( 1) 00:15:33.517 3986.773 - 4014.080: 100.0000% ( 23) 00:15:33.517 00:15:33.517 Complete histogram 00:15:33.517 ================== 00:15:33.517 Range in us Cumulative Count 00:15:33.517 2.373 - 2.387: 0.0053% ( 1) 00:15:33.517 2.387 - 2.400: 0.5291% ( 99) 00:15:33.517 2.400 - [2024-11-05 04:26:46.829430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.517 2.413: 0.6190% ( 17) 00:15:33.517 2.413 - 2.427: 0.7724% ( 29) 00:15:33.517 2.427 - 2.440: 35.9716% ( 6653) 00:15:33.517 2.440 - 2.453: 46.4949% ( 1989) 00:15:33.517 2.453 - 2.467: 67.2028% ( 3914) 00:15:33.517 2.467 - 2.480: 77.1441% ( 1879) 00:15:33.517 2.480 - 2.493: 79.9958% ( 539) 00:15:33.517 2.493 - 2.507: 82.4295% ( 460) 00:15:33.517 2.507 - 2.520: 88.0059% ( 1054) 00:15:33.517 2.520 - 2.533: 93.1538% ( 973) 00:15:33.517 2.533 - 2.547: 96.3282% ( 600) 00:15:33.517 2.547 - 2.560: 98.3599% ( 384) 00:15:33.517 2.560 - 2.573: 99.1376% ( 147) 00:15:33.517 2.573 - 2.587: 99.3863% ( 47) 00:15:33.517 2.587 - 2.600: 99.4180% ( 6) 00:15:33.517 2.627 - 2.640: 99.4233% ( 1) 00:15:33.517 2.693 - 2.707: 99.4286% ( 1) 00:15:33.517 3.080 - 3.093: 99.4339% ( 1) 00:15:33.517 3.093 - 3.107: 99.4392% ( 1) 00:15:33.517 4.187 - 4.213: 99.4498% ( 2) 00:15:33.517 4.240 - 4.267: 99.4551% ( 1) 00:15:33.517 4.320 - 4.347: 99.4603% ( 1) 00:15:33.517 4.347 - 4.373: 99.4656% ( 1) 00:15:33.517 4.373 - 4.400: 99.4709% ( 1) 00:15:33.517 4.400 - 4.427: 99.4815% ( 2) 00:15:33.517 4.427 - 4.453: 99.4868% ( 1) 00:15:33.517 4.453 - 4.480: 99.4974% ( 2) 00:15:33.517 4.507 - 4.533: 99.5027% ( 1) 00:15:33.517 4.613 - 4.640: 99.5080% ( 1) 00:15:33.517 4.720 - 4.747: 99.5133% ( 1) 00:15:33.517 4.747 - 4.773: 99.5185% ( 1) 00:15:33.517 4.853 - 4.880: 99.5238% ( 1) 00:15:33.517 4.880 - 4.907: 99.5291% ( 1) 00:15:33.517 4.933 - 4.960: 99.5344% ( 1) 00:15:33.517 4.987 - 5.013: 99.5397% ( 1) 00:15:33.517 5.093 - 5.120: 99.5503% ( 2) 00:15:33.517 5.147 - 5.173: 99.5556% ( 1) 00:15:33.517 5.253 - 5.280: 99.5609% ( 1) 00:15:33.517 5.307 - 5.333: 99.5662% ( 1) 00:15:33.517 5.360 - 5.387: 99.5715% ( 1) 00:15:33.517 5.547 - 5.573: 99.5767% ( 1) 00:15:33.517 5.573 - 5.600: 99.5820% ( 1) 00:15:33.517 5.627 - 5.653: 99.5873% ( 1) 00:15:33.517 5.680 - 5.707: 99.5926% ( 1) 00:15:33.517 5.920 - 5.947: 99.5979% ( 1) 00:15:33.517 5.973 - 6.000: 99.6032% ( 1) 00:15:33.517 6.373 - 6.400: 99.6085% ( 1) 00:15:33.517 6.480 - 6.507: 99.6138% ( 1) 00:15:33.517 6.507 - 6.533: 99.6191% ( 1) 00:15:33.517 6.773 - 6.800: 99.6244% ( 1) 00:15:33.517 6.933 - 6.987: 99.6296% ( 1) 00:15:33.517 33.920 - 34.133: 99.6349% ( 1) 00:15:33.517 40.747 - 40.960: 99.6402% ( 1) 00:15:33.517 146.773 - 147.627: 99.6455% ( 1) 00:15:33.517 3986.773 - 4014.080: 100.0000% ( 67) 00:15:33.517 00:15:33.517 04:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user 
/var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:33.517 04:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:33.517 04:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:33.517 04:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:33.517 04:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:33.517 [ 00:15:33.517 { 00:15:33.517 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:33.517 "subtype": "Discovery", 00:15:33.517 "listen_addresses": [], 00:15:33.517 "allow_any_host": true, 00:15:33.517 "hosts": [] 00:15:33.517 }, 00:15:33.517 { 00:15:33.517 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:33.517 "subtype": "NVMe", 00:15:33.517 "listen_addresses": [ 00:15:33.517 { 00:15:33.517 "trtype": "VFIOUSER", 00:15:33.517 "adrfam": "IPv4", 00:15:33.517 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:33.517 "trsvcid": "0" 00:15:33.517 } 00:15:33.517 ], 00:15:33.517 "allow_any_host": true, 00:15:33.517 "hosts": [], 00:15:33.517 "serial_number": "SPDK1", 00:15:33.517 "model_number": "SPDK bdev Controller", 00:15:33.517 "max_namespaces": 32, 00:15:33.517 "min_cntlid": 1, 00:15:33.517 "max_cntlid": 65519, 00:15:33.517 "namespaces": [ 00:15:33.517 { 00:15:33.517 "nsid": 1, 00:15:33.517 "bdev_name": "Malloc1", 00:15:33.517 "name": "Malloc1", 00:15:33.517 "nguid": "C459E5C42DB64110BA8A6DDD703050AA", 00:15:33.517 "uuid": "c459e5c4-2db6-4110-ba8a-6ddd703050aa" 00:15:33.517 }, 00:15:33.517 { 00:15:33.517 "nsid": 2, 00:15:33.517 "bdev_name": "Malloc3", 00:15:33.517 "name": "Malloc3", 00:15:33.517 "nguid": "FD62311FA8EE4595B67DE41D92395BB1", 00:15:33.517 "uuid": "fd62311f-a8ee-4595-b67d-e41d92395bb1" 00:15:33.517 } 00:15:33.517 ] 00:15:33.517 }, 00:15:33.517 { 00:15:33.517 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:33.517 "subtype": "NVMe", 00:15:33.517 "listen_addresses": [ 00:15:33.517 { 00:15:33.517 "trtype": "VFIOUSER", 00:15:33.517 "adrfam": "IPv4", 00:15:33.517 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:33.517 "trsvcid": "0" 00:15:33.517 } 00:15:33.517 ], 00:15:33.517 "allow_any_host": true, 00:15:33.517 "hosts": [], 00:15:33.517 "serial_number": "SPDK2", 00:15:33.517 "model_number": "SPDK bdev Controller", 00:15:33.517 "max_namespaces": 32, 00:15:33.517 "min_cntlid": 1, 00:15:33.517 "max_cntlid": 65519, 00:15:33.517 "namespaces": [ 00:15:33.517 { 00:15:33.517 "nsid": 1, 00:15:33.518 "bdev_name": "Malloc2", 00:15:33.518 "name": "Malloc2", 00:15:33.518 "nguid": "255BF30DAB3E462FBD5B968AD20521A0", 00:15:33.518 "uuid": "255bf30d-ab3e-462f-bd5b-968ad20521a0" 00:15:33.518 } 00:15:33.518 ] 00:15:33.518 } 00:15:33.518 ] 00:15:33.518 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:33.518 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:33.518 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2951307 00:15:33.518 04:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:33.518 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:15:33.518 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:33.518 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:33.518 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:15:33.518 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:33.518 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:33.779 Malloc4 00:15:33.779 [2024-11-05 04:26:47.247132] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.779 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:34.040 [2024-11-05 04:26:47.425223] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.040 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:34.040 Asynchronous Event Request test 00:15:34.040 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.040 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:34.040 Registering asynchronous event callbacks... 00:15:34.040 Starting namespace attribute notice tests for all controllers... 00:15:34.040 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:34.040 aer_cb - Changed Namespace 00:15:34.040 Cleaning up... 
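The hot-add exchange above follows a fixed pattern: the aer tool attaches to the vfio-user2 controller, arms its AER callbacks, and signals readiness by touching /tmp/aer_touch_file; the harness then hot-adds Malloc4 as nsid 2, which makes the target raise the Changed Namespace notice (log page 4) handled in aer_cb. A condensed sketch of that sequence, run from the spdk checkout with the long workspace paths abbreviated (the binaries, flags, and RPC verbs are the ones traced above):

    AER_TOUCH_FILE=/tmp/aer_touch_file
    # Start the AER listener; -n 2 is the expected nsid, -t touches the file once callbacks are armed
    test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
        -n 2 -g -t "$AER_TOUCH_FILE" &
    aerpid=$!
    while [ ! -e "$AER_TOUCH_FILE" ]; do sleep 1; done   # waitforfile
    rm -f "$AER_TOUCH_FILE"
    # Hot-add a namespace; the target raises a Changed Namespace AEN on cnode2
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    scripts/rpc.py nvmf_get_subsystems   # confirm nsid 2 (Malloc4) now appears under cnode2
    wait "$aerpid"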
00:15:34.040 [ 00:15:34.040 { 00:15:34.040 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:34.040 "subtype": "Discovery", 00:15:34.040 "listen_addresses": [], 00:15:34.040 "allow_any_host": true, 00:15:34.040 "hosts": [] 00:15:34.040 }, 00:15:34.040 { 00:15:34.040 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:34.040 "subtype": "NVMe", 00:15:34.040 "listen_addresses": [ 00:15:34.040 { 00:15:34.040 "trtype": "VFIOUSER", 00:15:34.040 "adrfam": "IPv4", 00:15:34.040 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:34.040 "trsvcid": "0" 00:15:34.040 } 00:15:34.040 ], 00:15:34.040 "allow_any_host": true, 00:15:34.040 "hosts": [], 00:15:34.040 "serial_number": "SPDK1", 00:15:34.040 "model_number": "SPDK bdev Controller", 00:15:34.040 "max_namespaces": 32, 00:15:34.040 "min_cntlid": 1, 00:15:34.040 "max_cntlid": 65519, 00:15:34.040 "namespaces": [ 00:15:34.040 { 00:15:34.040 "nsid": 1, 00:15:34.040 "bdev_name": "Malloc1", 00:15:34.040 "name": "Malloc1", 00:15:34.040 "nguid": "C459E5C42DB64110BA8A6DDD703050AA", 00:15:34.040 "uuid": "c459e5c4-2db6-4110-ba8a-6ddd703050aa" 00:15:34.040 }, 00:15:34.040 { 00:15:34.040 "nsid": 2, 00:15:34.040 "bdev_name": "Malloc3", 00:15:34.040 "name": "Malloc3", 00:15:34.040 "nguid": "FD62311FA8EE4595B67DE41D92395BB1", 00:15:34.040 "uuid": "fd62311f-a8ee-4595-b67d-e41d92395bb1" 00:15:34.040 } 00:15:34.040 ] 00:15:34.040 }, 00:15:34.040 { 00:15:34.040 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:34.040 "subtype": "NVMe", 00:15:34.040 "listen_addresses": [ 00:15:34.040 { 00:15:34.040 "trtype": "VFIOUSER", 00:15:34.040 "adrfam": "IPv4", 00:15:34.040 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:34.040 "trsvcid": "0" 00:15:34.040 } 00:15:34.040 ], 00:15:34.040 "allow_any_host": true, 00:15:34.040 "hosts": [], 00:15:34.040 "serial_number": "SPDK2", 00:15:34.040 "model_number": "SPDK bdev Controller", 00:15:34.040 "max_namespaces": 32, 00:15:34.040 "min_cntlid": 1, 00:15:34.040 "max_cntlid": 65519, 00:15:34.040 "namespaces": [ 00:15:34.040 { 00:15:34.040 "nsid": 1, 00:15:34.040 "bdev_name": "Malloc2", 00:15:34.040 "name": "Malloc2", 00:15:34.040 "nguid": "255BF30DAB3E462FBD5B968AD20521A0", 00:15:34.040 "uuid": "255bf30d-ab3e-462f-bd5b-968ad20521a0" 00:15:34.040 }, 00:15:34.040 { 00:15:34.040 "nsid": 2, 00:15:34.040 "bdev_name": "Malloc4", 00:15:34.040 "name": "Malloc4", 00:15:34.040 "nguid": "74B23ACAFA8E4BA194B4BA7A121831E4", 00:15:34.040 "uuid": "74b23aca-fa8e-4ba1-94b4-ba7a121831e4" 00:15:34.040 } 00:15:34.040 ] 00:15:34.040 } 00:15:34.040 ] 00:15:34.040 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2951307 00:15:34.040 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:34.040 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2941970 00:15:34.040 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 2941970 ']' 00:15:34.040 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2941970 00:15:34.040 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:34.040 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:34.040 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2941970 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2941970' 00:15:34.302 killing process with pid 2941970 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 2941970 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2941970 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2951327 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2951327' 00:15:34.302 Process pid: 2951327 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2951327 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 2951327 ']' 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:34.302 04:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:34.302 [2024-11-05 04:26:47.892870] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:34.302 [2024-11-05 04:26:47.893798] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:15:34.302 [2024-11-05 04:26:47.893844] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:34.563 [2024-11-05 04:26:47.963797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:34.563 [2024-11-05 04:26:47.998886] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:34.563 [2024-11-05 04:26:47.998917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:34.563 [2024-11-05 04:26:47.998926] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:34.563 [2024-11-05 04:26:47.998933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:34.563 [2024-11-05 04:26:47.998939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:34.563 [2024-11-05 04:26:48.000556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.563 [2024-11-05 04:26:48.000668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:34.563 [2024-11-05 04:26:48.000803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.563 [2024-11-05 04:26:48.000804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:34.563 [2024-11-05 04:26:48.055641] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:34.563 [2024-11-05 04:26:48.055743] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:34.563 [2024-11-05 04:26:48.056843] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:34.563 [2024-11-05 04:26:48.057642] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:34.564 [2024-11-05 04:26:48.057760] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
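Unlike the earlier polled-mode run, the target is restarted here with --interrupt-mode, and every reactor and spdk_thread is switched to event-driven (intr) mode before the VFIOUSER transport is created with the -M -I pair in the next step. A minimal sketch of that bring-up, with paths abbreviated as above:

    # Launch the target on cores 0-3 in interrupt mode (same flags as traced in this run)
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
    nvmfpid=$!
    sleep 1   # let the reactors come up and the RPC socket appear
    # Create the vfio-user transport with the interrupt-mode options used below
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I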
00:15:34.564 04:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:34.564 04:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:15:34.564 04:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:35.506 04:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:35.767 04:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:35.767 04:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:35.767 04:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:35.767 04:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:35.767 04:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:36.028 Malloc1 00:15:36.028 04:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:36.289 04:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:36.289 04:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:36.549 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:36.549 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:36.549 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:36.810 Malloc2 00:15:36.810 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:36.810 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:37.071 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:37.332 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:37.332 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2951327 00:15:37.332 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 2951327 ']' 00:15:37.332 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2951327 00:15:37.332 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:15:37.332 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:37.332 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2951327 00:15:37.332 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:37.332 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:37.332 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2951327' 00:15:37.332 killing process with pid 2951327 00:15:37.332 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 2951327 00:15:37.332 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2951327 00:15:37.593 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:37.593 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:37.593 00:15:37.593 real 0m51.357s 00:15:37.593 user 3m19.299s 00:15:37.593 sys 0m2.656s 00:15:37.593 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:37.593 04:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:37.593 ************************************ 00:15:37.593 END TEST nvmf_vfio_user 00:15:37.593 ************************************ 00:15:37.593 04:26:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:37.593 04:26:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:37.593 04:26:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:37.593 04:26:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:37.593 ************************************ 00:15:37.593 START TEST nvmf_vfio_user_nvme_compliance 00:15:37.593 ************************************ 00:15:37.593 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:37.593 * Looking for test storage... 
00:15:37.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:37.593 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:37.593 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:15:37.593 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:37.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.855 --rc genhtml_branch_coverage=1 00:15:37.855 --rc genhtml_function_coverage=1 00:15:37.855 --rc genhtml_legend=1 00:15:37.855 --rc geninfo_all_blocks=1 00:15:37.855 --rc geninfo_unexecuted_blocks=1 00:15:37.855 00:15:37.855 ' 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:37.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.855 --rc genhtml_branch_coverage=1 00:15:37.855 --rc genhtml_function_coverage=1 00:15:37.855 --rc genhtml_legend=1 00:15:37.855 --rc geninfo_all_blocks=1 00:15:37.855 --rc geninfo_unexecuted_blocks=1 00:15:37.855 00:15:37.855 ' 00:15:37.855 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:37.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.855 --rc genhtml_branch_coverage=1 00:15:37.856 --rc genhtml_function_coverage=1 00:15:37.856 --rc genhtml_legend=1 00:15:37.856 --rc geninfo_all_blocks=1 00:15:37.856 --rc geninfo_unexecuted_blocks=1 00:15:37.856 00:15:37.856 ' 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:37.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:37.856 --rc genhtml_branch_coverage=1 00:15:37.856 --rc genhtml_function_coverage=1 00:15:37.856 --rc genhtml_legend=1 00:15:37.856 --rc geninfo_all_blocks=1 00:15:37.856 --rc 
geninfo_unexecuted_blocks=1 00:15:37.856 00:15:37.856 ' 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:37.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2952076 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2952076' 00:15:37.856 Process pid: 2952076 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2952076 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 2952076 ']' 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:37.856 04:26:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:37.856 [2024-11-05 04:26:51.362246] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:15:37.856 [2024-11-05 04:26:51.362299] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.856 [2024-11-05 04:26:51.434737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:37.856 [2024-11-05 04:26:51.469785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:37.856 [2024-11-05 04:26:51.469821] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:37.856 [2024-11-05 04:26:51.469830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:37.856 [2024-11-05 04:26:51.469836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:37.856 [2024-11-05 04:26:51.469842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:37.856 [2024-11-05 04:26:51.471420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.856 [2024-11-05 04:26:51.471304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.856 [2024-11-05 04:26:51.471417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:38.799 04:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:38.799 04:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:15:38.799 04:26:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:39.742 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:39.742 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:39.742 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:39.742 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.742 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:39.742 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.742 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:39.742 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:39.742 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.742 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:39.742 malloc0 00:15:39.742 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.742 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:39.742 04:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.742 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:39.743 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.743 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:39.743 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.743 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:39.743 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.743 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:39.743 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.743 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:39.743 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.743 04:26:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:39.743 00:15:39.743 00:15:39.743 CUnit - A unit testing framework for C - Version 2.1-3 00:15:39.743 http://cunit.sourceforge.net/ 00:15:39.743 00:15:39.743 00:15:39.743 Suite: nvme_compliance 00:15:40.004 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-05 04:26:53.423221] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.004 [2024-11-05 04:26:53.424569] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:40.004 [2024-11-05 04:26:53.424581] vfio_user.c:5507:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:40.004 [2024-11-05 04:26:53.424585] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:40.004 [2024-11-05 04:26:53.426237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.004 passed 00:15:40.004 Test: admin_identify_ctrlr_verify_fused ...[2024-11-05 04:26:53.518794] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.004 [2024-11-05 04:26:53.521817] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.004 passed 00:15:40.004 Test: admin_identify_ns ...[2024-11-05 04:26:53.617994] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.264 [2024-11-05 04:26:53.677759] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:40.264 [2024-11-05 04:26:53.685760] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:40.264 [2024-11-05 04:26:53.706884] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:15:40.265 passed 00:15:40.265 Test: admin_get_features_mandatory_features ...[2024-11-05 04:26:53.800866] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.265 [2024-11-05 04:26:53.803880] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.265 passed 00:15:40.265 Test: admin_get_features_optional_features ...[2024-11-05 04:26:53.897403] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.265 [2024-11-05 04:26:53.900420] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.525 passed 00:15:40.525 Test: admin_set_features_number_of_queues ...[2024-11-05 04:26:53.994503] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.525 [2024-11-05 04:26:54.098851] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.525 passed 00:15:40.785 Test: admin_get_log_page_mandatory_logs ...[2024-11-05 04:26:54.191525] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.785 [2024-11-05 04:26:54.194547] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.785 passed 00:15:40.785 Test: admin_get_log_page_with_lpo ...[2024-11-05 04:26:54.286003] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:40.785 [2024-11-05 04:26:54.357768] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:40.786 [2024-11-05 04:26:54.370825] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:40.786 passed 00:15:41.046 Test: fabric_property_get ...[2024-11-05 04:26:54.460457] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.046 [2024-11-05 04:26:54.461706] vfio_user.c:5600:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:41.046 [2024-11-05 04:26:54.463478] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.046 passed 00:15:41.046 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-05 04:26:54.558083] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.046 [2024-11-05 04:26:54.559337] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:41.046 [2024-11-05 04:26:54.561100] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.046 passed 00:15:41.046 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-05 04:26:54.655001] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.306 [2024-11-05 04:26:54.738758] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:41.307 [2024-11-05 04:26:54.754754] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:41.307 [2024-11-05 04:26:54.759833] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.307 passed 00:15:41.307 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-05 04:26:54.851858] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.307 [2024-11-05 04:26:54.853100] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:41.307 [2024-11-05 04:26:54.854875] vfio_user.c:2798:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.307 passed 00:15:41.567 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-05 04:26:54.950033] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.567 [2024-11-05 04:26:55.026753] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:41.567 [2024-11-05 04:26:55.050754] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:41.567 [2024-11-05 04:26:55.055839] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.567 passed 00:15:41.567 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-05 04:26:55.147470] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.567 [2024-11-05 04:26:55.148708] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:41.567 [2024-11-05 04:26:55.148732] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:41.567 [2024-11-05 04:26:55.150487] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.567 passed 00:15:41.828 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-05 04:26:55.241996] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.828 [2024-11-05 04:26:55.337754] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:41.828 [2024-11-05 04:26:55.345761] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:41.828 [2024-11-05 04:26:55.353756] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:41.828 [2024-11-05 04:26:55.361761] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:41.828 [2024-11-05 04:26:55.390838] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.828 passed 00:15:42.089 Test: admin_create_io_sq_verify_pc ...[2024-11-05 04:26:55.480449] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.089 [2024-11-05 04:26:55.495761] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:42.089 [2024-11-05 04:26:55.513595] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.089 passed 00:15:42.089 Test: admin_create_io_qp_max_qps ...[2024-11-05 04:26:55.609123] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.476 [2024-11-05 04:26:56.699759] nvme_ctrlr.c:5487:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:43.476 [2024-11-05 04:26:57.075401] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.476 passed 00:15:43.737 Test: admin_create_io_sq_shared_cq ...[2024-11-05 04:26:57.167993] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.737 [2024-11-05 04:26:57.298754] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:43.737 [2024-11-05 04:26:57.335823] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.998 passed 00:15:43.998 00:15:43.998 Run Summary: Type Total Ran Passed Failed Inactive 00:15:43.998 suites 1 1 n/a 0 0 00:15:43.998 tests 18 18 18 0 0 00:15:43.998 asserts 
360 360 360 0 n/a 00:15:43.998 00:15:43.998 Elapsed time = 1.640 seconds 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2952076 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 2952076 ']' 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 2952076 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2952076 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2952076' 00:15:43.998 killing process with pid 2952076 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 2952076 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 2952076 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:43.998 00:15:43.998 real 0m6.525s 00:15:43.998 user 0m18.527s 00:15:43.998 sys 0m0.520s 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:43.998 ************************************ 00:15:43.998 END TEST nvmf_vfio_user_nvme_compliance 00:15:43.998 ************************************ 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:43.998 04:26:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:44.260 ************************************ 00:15:44.260 START TEST nvmf_vfio_user_fuzz 00:15:44.260 ************************************ 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:44.260 * Looking for test storage... 
00:15:44.260 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:44.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.260 --rc genhtml_branch_coverage=1 00:15:44.260 --rc genhtml_function_coverage=1 00:15:44.260 --rc genhtml_legend=1 00:15:44.260 --rc geninfo_all_blocks=1 00:15:44.260 --rc geninfo_unexecuted_blocks=1 00:15:44.260 00:15:44.260 ' 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:44.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.260 --rc genhtml_branch_coverage=1 00:15:44.260 --rc genhtml_function_coverage=1 00:15:44.260 --rc genhtml_legend=1 00:15:44.260 --rc geninfo_all_blocks=1 00:15:44.260 --rc geninfo_unexecuted_blocks=1 00:15:44.260 00:15:44.260 ' 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:44.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.260 --rc genhtml_branch_coverage=1 00:15:44.260 --rc genhtml_function_coverage=1 00:15:44.260 --rc genhtml_legend=1 00:15:44.260 --rc geninfo_all_blocks=1 00:15:44.260 --rc geninfo_unexecuted_blocks=1 00:15:44.260 00:15:44.260 ' 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:44.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:44.260 --rc genhtml_branch_coverage=1 00:15:44.260 --rc genhtml_function_coverage=1 00:15:44.260 --rc genhtml_legend=1 00:15:44.260 --rc geninfo_all_blocks=1 00:15:44.260 --rc geninfo_unexecuted_blocks=1 00:15:44.260 00:15:44.260 ' 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:44.260 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:44.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2953476 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2953476' 00:15:44.261 Process pid: 2953476 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:44.261 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2953476 00:15:44.522 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 2953476 ']' 00:15:44.522 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.522 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:44.522 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
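Here vfio_user_fuzz.sh starts its own target (pid 2953476) and blocks in waitforlisten until the RPC socket answers. The start-and-wait pattern, sketched with the arguments from this run (the real helper adds a retry limit and diagnostics):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &      # shm id 0, full trace mask, core 0
    nvmfpid=$!
    echo "Process pid: $nvmfpid"
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
    # Poll the default UNIX-domain RPC socket until the target responds.
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done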
00:15:44.522 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:44.522 04:26:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:45.465 04:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:45.465 04:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:15:45.465 04:26:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.408 malloc0 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
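Condensed, the subsystem the fuzzer will attack is built with five RPCs, reproduced from the trace above (rpc.py talking to the default /var/tmp/spdk.sock):

    rpc.py nvmf_create_transport -t VFIOUSER
    rpc.py bdev_malloc_create 64 512 -b malloc0      # 64 MiB bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0       # listen on the vfio-user socket dir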
00:15:46.408 04:26:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:18.527 Fuzzing completed. Shutting down the fuzz application 00:16:18.527 00:16:18.527 Dumping successful admin opcodes: 00:16:18.527 8, 9, 10, 24, 00:16:18.527 Dumping successful io opcodes: 00:16:18.527 0, 00:16:18.527 NS: 0x20000081ef00 I/O qp, Total commands completed: 1095595, total successful commands: 4319, random_seed: 825560832 00:16:18.527 NS: 0x20000081ef00 admin qp, Total commands completed: 137758, total successful commands: 1117, random_seed: 3579869312 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2953476 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 2953476 ']' 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 2953476 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2953476 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2953476' 00:16:18.527 killing process with pid 2953476 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 2953476 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 2953476 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:18.527 00:16:18.527 real 0m33.762s 00:16:18.527 user 0m38.199s 00:16:18.527 sys 0m25.312s 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:18.527 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:18.527 
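For the record, the pass that produced the opcode dump above came from this invocation (the roughly 32 s of wall time is consistent with -t 30):

    # -m 0x2   : run the fuzzer on core 1; the target holds core 0 (-m 0x1)
    # -t 30    : time-bound the run to 30 seconds
    # -S 123456: fixed random seed, so any crash found here can be replayed
    # -F <trid>: transport ID of the vfio-user controller under test
    # -N -a    : passed through by vfio_user_fuzz.sh; see the tool's usage text
    nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a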
************************************ 00:16:18.527 END TEST nvmf_vfio_user_fuzz 00:16:18.527 ************************************ 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:18.528 ************************************ 00:16:18.528 START TEST nvmf_auth_target 00:16:18.528 ************************************ 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:18.528 * Looking for test storage... 00:16:18.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:18.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.528 --rc genhtml_branch_coverage=1 00:16:18.528 --rc genhtml_function_coverage=1 00:16:18.528 --rc genhtml_legend=1 00:16:18.528 --rc geninfo_all_blocks=1 00:16:18.528 --rc geninfo_unexecuted_blocks=1 00:16:18.528 00:16:18.528 ' 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:18.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.528 --rc genhtml_branch_coverage=1 00:16:18.528 --rc genhtml_function_coverage=1 00:16:18.528 --rc genhtml_legend=1 00:16:18.528 --rc geninfo_all_blocks=1 00:16:18.528 --rc geninfo_unexecuted_blocks=1 00:16:18.528 00:16:18.528 ' 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:18.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.528 --rc genhtml_branch_coverage=1 00:16:18.528 --rc genhtml_function_coverage=1 00:16:18.528 --rc genhtml_legend=1 00:16:18.528 --rc geninfo_all_blocks=1 00:16:18.528 --rc geninfo_unexecuted_blocks=1 00:16:18.528 00:16:18.528 ' 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:18.528 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:18.528 --rc genhtml_branch_coverage=1 00:16:18.528 --rc genhtml_function_coverage=1 00:16:18.528 --rc genhtml_legend=1 00:16:18.528 --rc geninfo_all_blocks=1 00:16:18.528 --rc geninfo_unexecuted_blocks=1 00:16:18.528 00:16:18.528 ' 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:18.528 04:27:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:18.528 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:18.529 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:16:18.529 04:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:16:25.122 
04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:25.122 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:25.122 04:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:25.122 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.122 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:25.122 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:25.123 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:25.123 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:25.384 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:25.384 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:25.384 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:25.384 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:25.384 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:25.384 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:25.384 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:25.384 04:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:25.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:25.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms 00:16:25.384 00:16:25.384 --- 10.0.0.2 ping statistics --- 00:16:25.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.384 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms 00:16:25.384 04:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:25.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:25.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.261 ms 00:16:25.384 00:16:25.384 --- 10.0.0.1 ping statistics --- 00:16:25.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.384 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:16:25.384 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.384 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:25.384 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:25.384 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.384 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:25.384 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:25.384 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.384 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:25.384 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:25.645 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:25.645 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:25.645 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:25.645 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.645 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2964331 00:16:25.645 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2964331 00:16:25.645 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:25.645 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2964331 ']' 00:16:25.645 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.645 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:25.645 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
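Behind the Found-net-devices lines above, nvmf_tcp_init wires the two e810 ports into a point-to-point pair: the target-side port is moved into a private network namespace, each side gets a 10.0.0.x/24 address, and TCP port 4420 is opened. Condensed from the trace, with this run's interface names:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the host ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays behind
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2             # the 0.620 ms reply above proves the path works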
00:16:25.645 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:25.645 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2964388 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a892f45b769dd6eff14b636019bce2fb7db0828ea328ae28 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.wiJ 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a892f45b769dd6eff14b636019bce2fb7db0828ea328ae28 0 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a892f45b769dd6eff14b636019bce2fb7db0828ea328ae28 0 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a892f45b769dd6eff14b636019bce2fb7db0828ea328ae28 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:26.590 04:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
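gen_dhchap_key above draws len/2 random bytes with xxd and wraps them in the DHHC-1 secret representation from NVMe TP 8006. A sketch of the null-digest case, assuming the usual encoding (base64 of the secret with its little-endian CRC-32 appended; digest id 00 = no hash, 01/02/03 = SHA-256/384/512):

    key_hex=$(xxd -p -c0 -l 24 /dev/urandom)     # 24 random bytes = 48 hex chars
    file=$(mktemp -t spdk.key-null.XXX)
    # Assumed DHHC-1 layout: "DHHC-1:<digest-id>:<base64(secret || crc32(secret))>:"
    python3 -c 'import base64, sys, zlib; k = bytes.fromhex(sys.argv[1]); print("DHHC-1:00:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key_hex" > "$file"
    chmod 0600 "$file"                           # keep the secret private, as the test does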
00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.wiJ 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.wiJ 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.wiJ 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a2e92e61f548c2ab9f783f35c751933a05c50aa3012554049192bbce3d4d05d5 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.1gw 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a2e92e61f548c2ab9f783f35c751933a05c50aa3012554049192bbce3d4d05d5 3 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a2e92e61f548c2ab9f783f35c751933a05c50aa3012554049192bbce3d4d05d5 3 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a2e92e61f548c2ab9f783f35c751933a05c50aa3012554049192bbce3d4d05d5 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.1gw 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.1gw 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.1gw 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=c4adc84ccd99a953d87385b8b663fcda 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.NNa 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key c4adc84ccd99a953d87385b8b663fcda 1 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 c4adc84ccd99a953d87385b8b663fcda 1 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=c4adc84ccd99a953d87385b8b663fcda 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.NNa 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.NNa 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.NNa 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f845437854a4b5c6fd5ba01dfd1f3f8a3b78f7c65eee5570 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.pqN 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f845437854a4b5c6fd5ba01dfd1f3f8a3b78f7c65eee5570 2 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f845437854a4b5c6fd5ba01dfd1f3f8a3b78f7c65eee5570 2 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:26.590 04:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f845437854a4b5c6fd5ba01dfd1f3f8a3b78f7c65eee5570 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.pqN 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.pqN 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.pqN 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.590 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:26.591 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:26.591 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e2a058993f8fdc8a826d2dac91f48d828141000ff6e79389 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.D7Q 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e2a058993f8fdc8a826d2dac91f48d828141000ff6e79389 2 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 e2a058993f8fdc8a826d2dac91f48d828141000ff6e79389 2 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e2a058993f8fdc8a826d2dac91f48d828141000ff6e79389 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.D7Q 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.D7Q 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.D7Q 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
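
[Editor's note] Generation of ckeys[2] continues below; for orientation, this is the full matrix the target/auth.sh@94-97 steps assemble. Each host key is paired with an optional controller ("ctrlr") key of a different digest, and the digest fixes the secret length (null/sha256 -> 32 hex chars, sha384 -> 48, sha512 -> 64). Reconstructed from the traced generator calls and key-file names; the keys[0] call itself scrolled past before this excerpt, so its arguments are inferred from /tmp/spdk.key-null.wiJ:

keys[0]=$(gen_dhchap_key null 32)    ; ckeys[0]=$(gen_dhchap_key sha512 64)  # spdk.key-null.wiJ   / spdk.key-sha512.1gw
keys[1]=$(gen_dhchap_key sha256 32)  ; ckeys[1]=$(gen_dhchap_key sha384 48)  # spdk.key-sha256.NNa / spdk.key-sha384.pqN
keys[2]=$(gen_dhchap_key sha384 48)  ; ckeys[2]=$(gen_dhchap_key sha256 32)  # spdk.key-sha384.D7Q / spdk.key-sha256.XwT
keys[3]=$(gen_dhchap_key sha512 64)  ; ckeys[3]=                             # spdk.key-sha512.zEg / no bidirectional key

As the chmod/echo pairs in the trace show, gen_dhchap_key prints the 0600-permission key file it wrote, which is why the arrays hold /tmp paths rather than raw secrets.
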
00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1a02b2342b07cb8f74028319be0be426 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.XwT 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1a02b2342b07cb8f74028319be0be426 1 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1a02b2342b07cb8f74028319be0be426 1 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1a02b2342b07cb8f74028319be0be426 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.XwT 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.XwT 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.XwT 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=81a89072bf77a405bb90b8b3e63db4874258d377c33ac9181031e4922b81d812 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.zEg 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 81a89072bf77a405bb90b8b3e63db4874258d377c33ac9181031e4922b81d812 3 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 81a89072bf77a405bb90b8b3e63db4874258d377c33ac9181031e4922b81d812 3 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=81a89072bf77a405bb90b8b3e63db4874258d377c33ac9181031e4922b81d812 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.zEg 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.zEg 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.zEg 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2964331 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2964331 ']' 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:26.852 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.114 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:27.114 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:27.114 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2964388 /var/tmp/host.sock 00:16:27.114 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2964388 ']' 00:16:27.114 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:16:27.114 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:27.114 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:27.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
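
[Editor's note] With both daemons up (waitforlisten on /var/tmp/spdk.sock for the target, /var/tmp/host.sock for the bdev_nvme host), the rest of this trace registers every key file in both keyrings and then walks digest x dhgroup x key combinations through connect_authenticate. A condensed replay of one pass follows, continuing from the keys/ckeys arrays above, with all RPC names and flags as traced; the $rpc/$subnqn/$hostnqn shorthands, reading the secrets back with cat, and the single jq -e expression (the trace runs three separate jq -r checks) are conveniences introduced here:

rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be"
subnqn="nqn.2024-03.io.spdk:cnode0"

# register every key under the same name on the target and host keyrings
for i in "${!keys[@]}"; do
  $rpc keyring_file_add_key "key$i" "${keys[$i]}"
  $rpc -s /var/tmp/host.sock keyring_file_add_key "key$i" "${keys[$i]}"
  if [[ -n ${ckeys[$i]} ]]; then
    $rpc keyring_file_add_key "ckey$i" "${ckeys[$i]}"
    $rpc -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "${ckeys[$i]}"
  fi
done

# one connect_authenticate pass (sha256 / null dhgroup / key0, as traced below)
$rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
  -q "$hostnqn" -n "$subnqn" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# verify the qpair actually authenticated with the requested digest/dhgroup
$rpc nvmf_subsystem_get_qpairs "$subnqn" |
  jq -e '.[0].auth | .state == "completed" and .digest == "sha256" and .dhgroup == "null"'
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# the same combination through the kernel initiator, feeding the formatted secrets directly
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
  --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 \
  --dhchap-secret "$(cat "${keys[0]}")" --dhchap-ctrl-secret "$(cat "${ckeys[0]}")"
nvme disconnect -n "$subnqn"
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"  # clean slate for the next combination

Detaching the controller and removing the host between passes is what produces the repeating attach / detach / disconnect / remove_host rhythm through the remainder of this log.
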
00:16:27.114 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:27.114 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wiJ 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.wiJ 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.wiJ 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.1gw ]] 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1gw 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1gw 00:16:27.375 04:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1gw 00:16:27.635 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:27.635 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.NNa 00:16:27.635 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.635 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.636 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.636 04:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.NNa 00:16:27.636 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.NNa 00:16:27.896 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.pqN ]] 00:16:27.896 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pqN 00:16:27.896 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.896 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.897 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.897 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pqN 00:16:27.897 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pqN 00:16:27.897 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:27.897 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.D7Q 00:16:27.897 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.897 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.897 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.897 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.D7Q 00:16:27.897 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.D7Q 00:16:28.158 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.XwT ]] 00:16:28.158 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XwT 00:16:28.158 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.158 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.158 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.158 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XwT 00:16:28.158 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XwT 00:16:28.420 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:28.420 04:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.zEg 00:16:28.420 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.420 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.420 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.420 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.zEg 00:16:28.420 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.zEg 00:16:28.420 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:28.420 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:28.420 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.420 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.420 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.420 04:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.682 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:28.682 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.682 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.682 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:28.682 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:28.682 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.682 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.682 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.682 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.682 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.682 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.682 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.682 
04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.944 00:16:28.944 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.944 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.944 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.205 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.205 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.205 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.205 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.205 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.205 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.205 { 00:16:29.205 "cntlid": 1, 00:16:29.205 "qid": 0, 00:16:29.205 "state": "enabled", 00:16:29.205 "thread": "nvmf_tgt_poll_group_000", 00:16:29.205 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:29.205 "listen_address": { 00:16:29.205 "trtype": "TCP", 00:16:29.205 "adrfam": "IPv4", 00:16:29.205 "traddr": "10.0.0.2", 00:16:29.205 "trsvcid": "4420" 00:16:29.205 }, 00:16:29.205 "peer_address": { 00:16:29.205 "trtype": "TCP", 00:16:29.205 "adrfam": "IPv4", 00:16:29.205 "traddr": "10.0.0.1", 00:16:29.205 "trsvcid": "57720" 00:16:29.205 }, 00:16:29.205 "auth": { 00:16:29.205 "state": "completed", 00:16:29.205 "digest": "sha256", 00:16:29.205 "dhgroup": "null" 00:16:29.205 } 00:16:29.205 } 00:16:29.205 ]' 00:16:29.205 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.205 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.205 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.205 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:29.205 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.205 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.205 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.205 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.466 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:16:29.466 04:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:16:30.038 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.300 04:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.300 04:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.561 00:16:30.561 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.561 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.561 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.822 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.822 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.822 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.822 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.822 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.822 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.822 { 00:16:30.822 "cntlid": 3, 00:16:30.822 "qid": 0, 00:16:30.822 "state": "enabled", 00:16:30.822 "thread": "nvmf_tgt_poll_group_000", 00:16:30.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:30.822 "listen_address": { 00:16:30.822 "trtype": "TCP", 00:16:30.822 "adrfam": "IPv4", 00:16:30.822 "traddr": "10.0.0.2", 00:16:30.822 "trsvcid": "4420" 00:16:30.822 }, 00:16:30.822 "peer_address": { 00:16:30.822 "trtype": "TCP", 00:16:30.822 "adrfam": "IPv4", 00:16:30.822 "traddr": "10.0.0.1", 00:16:30.822 "trsvcid": "57734" 00:16:30.822 }, 00:16:30.822 "auth": { 00:16:30.822 "state": "completed", 00:16:30.822 "digest": "sha256", 00:16:30.822 "dhgroup": "null" 00:16:30.822 } 00:16:30.822 } 00:16:30.822 ]' 00:16:30.822 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.822 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.822 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.822 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:30.822 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.822 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.822 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.822 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.083 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:16:31.083 04:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.030 04:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.030 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.292 00:16:32.292 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.292 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.292 04:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.553 04:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.553 04:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.553 04:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.553 04:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.553 04:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.553 04:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.553 { 00:16:32.553 "cntlid": 5, 00:16:32.553 "qid": 0, 00:16:32.553 "state": "enabled", 00:16:32.553 "thread": "nvmf_tgt_poll_group_000", 00:16:32.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:32.553 "listen_address": { 00:16:32.553 "trtype": "TCP", 00:16:32.553 "adrfam": "IPv4", 00:16:32.553 "traddr": "10.0.0.2", 00:16:32.553 "trsvcid": "4420" 00:16:32.553 }, 00:16:32.553 "peer_address": { 00:16:32.553 "trtype": "TCP", 00:16:32.553 "adrfam": "IPv4", 00:16:32.553 "traddr": "10.0.0.1", 00:16:32.553 "trsvcid": "57774" 00:16:32.553 }, 00:16:32.553 "auth": { 00:16:32.553 "state": "completed", 00:16:32.553 "digest": "sha256", 00:16:32.553 "dhgroup": "null" 00:16:32.553 } 00:16:32.553 } 00:16:32.553 ]' 00:16:32.553 04:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.553 04:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.553 04:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.553 04:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:32.553 04:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.553 04:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.553 04:27:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.553 04:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.814 04:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:16:32.814 04:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.755 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.017 00:16:34.017 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.017 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.017 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.278 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.278 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.278 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.278 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.278 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.278 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.278 { 00:16:34.278 "cntlid": 7, 00:16:34.278 "qid": 0, 00:16:34.278 "state": "enabled", 00:16:34.278 "thread": "nvmf_tgt_poll_group_000", 00:16:34.279 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:34.279 "listen_address": { 00:16:34.279 "trtype": "TCP", 00:16:34.279 "adrfam": "IPv4", 00:16:34.279 "traddr": "10.0.0.2", 00:16:34.279 "trsvcid": "4420" 00:16:34.279 }, 00:16:34.279 "peer_address": { 00:16:34.279 "trtype": "TCP", 00:16:34.279 "adrfam": "IPv4", 00:16:34.279 "traddr": "10.0.0.1", 00:16:34.279 "trsvcid": "57804" 00:16:34.279 }, 00:16:34.279 "auth": { 00:16:34.279 "state": "completed", 00:16:34.279 "digest": "sha256", 00:16:34.279 "dhgroup": "null" 00:16:34.279 } 00:16:34.279 } 00:16:34.279 ]' 00:16:34.279 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.279 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.279 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.279 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:34.279 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.279 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.279 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.279 04:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.540 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:16:34.540 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.482 04:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.743 00:16:35.744 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:35.744 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:35.744 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.005 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.005 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.005 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.005 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.005 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.005 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.005 { 00:16:36.005 "cntlid": 9, 00:16:36.005 "qid": 0, 00:16:36.005 "state": "enabled", 00:16:36.005 "thread": "nvmf_tgt_poll_group_000", 00:16:36.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:36.005 "listen_address": { 00:16:36.005 "trtype": "TCP", 00:16:36.005 "adrfam": "IPv4", 00:16:36.005 "traddr": "10.0.0.2", 00:16:36.005 "trsvcid": "4420" 00:16:36.005 }, 00:16:36.005 "peer_address": { 00:16:36.005 "trtype": "TCP", 00:16:36.005 "adrfam": "IPv4", 00:16:36.005 "traddr": "10.0.0.1", 00:16:36.005 "trsvcid": "34070" 00:16:36.005 }, 00:16:36.005 "auth": { 00:16:36.005 "state": "completed", 00:16:36.005 "digest": "sha256", 00:16:36.005 "dhgroup": "ffdhe2048" 00:16:36.005 } 00:16:36.005 } 00:16:36.005 ]' 00:16:36.005 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.005 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.005 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.005 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:16:36.005 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.005 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.005 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.005 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.266 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:16:36.266 04:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.209 04:27:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.209 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.472 00:16:37.472 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.472 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.472 04:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:37.472 04:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.472 04:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:37.472 04:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.472 04:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.733 04:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.733 04:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:37.733 { 00:16:37.733 "cntlid": 11, 00:16:37.733 "qid": 0, 00:16:37.733 "state": "enabled", 00:16:37.733 "thread": "nvmf_tgt_poll_group_000", 00:16:37.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:37.733 "listen_address": { 00:16:37.733 "trtype": "TCP", 00:16:37.733 "adrfam": "IPv4", 00:16:37.733 "traddr": "10.0.0.2", 00:16:37.733 "trsvcid": "4420" 00:16:37.733 }, 00:16:37.733 "peer_address": { 00:16:37.733 "trtype": "TCP", 00:16:37.733 "adrfam": "IPv4", 00:16:37.733 "traddr": "10.0.0.1", 00:16:37.733 "trsvcid": "34092" 00:16:37.733 }, 00:16:37.733 "auth": { 00:16:37.733 "state": "completed", 00:16:37.733 "digest": "sha256", 00:16:37.733 "dhgroup": "ffdhe2048" 00:16:37.733 } 00:16:37.733 } 00:16:37.733 ]' 00:16:37.733 04:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:37.733 04:27:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:37.733 04:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.733 04:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:37.733 04:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.733 04:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.733 04:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.733 04:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.994 04:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:16:37.994 04:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:16:38.566 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.566 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:38.566 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.566 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:38.827 04:27:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.827 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.089 00:16:39.089 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.089 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.089 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.351 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.351 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.351 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.351 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.351 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.351 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.351 { 00:16:39.351 "cntlid": 13, 00:16:39.351 "qid": 0, 00:16:39.351 "state": "enabled", 00:16:39.351 "thread": "nvmf_tgt_poll_group_000", 00:16:39.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:39.351 "listen_address": { 00:16:39.351 "trtype": "TCP", 00:16:39.351 "adrfam": "IPv4", 00:16:39.351 "traddr": "10.0.0.2", 00:16:39.351 "trsvcid": "4420" 00:16:39.351 }, 00:16:39.351 "peer_address": { 00:16:39.351 "trtype": "TCP", 00:16:39.351 "adrfam": "IPv4", 00:16:39.351 "traddr": "10.0.0.1", 00:16:39.351 "trsvcid": "34120" 00:16:39.351 }, 00:16:39.351 "auth": { 00:16:39.351 "state": "completed", 00:16:39.351 "digest": 
"sha256", 00:16:39.351 "dhgroup": "ffdhe2048" 00:16:39.351 } 00:16:39.351 } 00:16:39.351 ]' 00:16:39.351 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.351 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.351 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.351 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:39.351 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.351 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.351 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.351 04:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.612 04:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:16:39.612 04:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:16:40.554 04:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.555 04:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:40.555 04:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.555 04:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.555 04:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.555 04:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.555 04:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:40.555 04:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:40.555 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:40.555 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.555 04:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.555 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:40.555 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:40.555 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.555 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:40.555 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.555 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.555 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.555 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:40.555 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.555 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.816 00:16:40.816 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.816 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.816 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.078 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.078 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.078 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.078 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.078 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.078 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.078 { 00:16:41.078 "cntlid": 15, 00:16:41.078 "qid": 0, 00:16:41.078 "state": "enabled", 00:16:41.078 "thread": "nvmf_tgt_poll_group_000", 00:16:41.078 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:41.078 "listen_address": { 00:16:41.078 "trtype": "TCP", 00:16:41.078 "adrfam": "IPv4", 00:16:41.078 "traddr": "10.0.0.2", 00:16:41.078 "trsvcid": "4420" 00:16:41.078 }, 00:16:41.078 "peer_address": { 00:16:41.078 "trtype": "TCP", 00:16:41.078 "adrfam": "IPv4", 00:16:41.078 "traddr": "10.0.0.1", 00:16:41.078 
"trsvcid": "34142" 00:16:41.078 }, 00:16:41.078 "auth": { 00:16:41.078 "state": "completed", 00:16:41.078 "digest": "sha256", 00:16:41.078 "dhgroup": "ffdhe2048" 00:16:41.078 } 00:16:41.078 } 00:16:41.078 ]' 00:16:41.078 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.078 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.078 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.078 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:41.078 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.078 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.078 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.078 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.340 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:16:41.340 04:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:16:42.283 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.283 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:42.283 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.283 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.283 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.283 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.283 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.283 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.284 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.284 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:42.284 04:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.284 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:42.284 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:42.284 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:42.284 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.284 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.284 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.284 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.284 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.284 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.284 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.284 04:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.545 00:16:42.545 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.545 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.545 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.807 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.807 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.807 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.807 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.807 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.807 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.807 { 00:16:42.807 "cntlid": 17, 00:16:42.807 "qid": 0, 00:16:42.807 "state": "enabled", 00:16:42.807 "thread": "nvmf_tgt_poll_group_000", 00:16:42.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:42.807 "listen_address": { 00:16:42.807 "trtype": "TCP", 00:16:42.807 "adrfam": "IPv4", 
00:16:42.807 "traddr": "10.0.0.2", 00:16:42.807 "trsvcid": "4420" 00:16:42.807 }, 00:16:42.807 "peer_address": { 00:16:42.807 "trtype": "TCP", 00:16:42.807 "adrfam": "IPv4", 00:16:42.807 "traddr": "10.0.0.1", 00:16:42.807 "trsvcid": "34176" 00:16:42.807 }, 00:16:42.807 "auth": { 00:16:42.807 "state": "completed", 00:16:42.807 "digest": "sha256", 00:16:42.807 "dhgroup": "ffdhe3072" 00:16:42.807 } 00:16:42.807 } 00:16:42.807 ]' 00:16:42.807 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.807 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.807 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.807 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.807 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.807 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.807 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.807 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.069 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:16:43.069 04:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:16:43.640 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:43.901 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.163 00:16:44.163 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.163 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.163 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.425 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.425 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.425 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.425 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.425 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.425 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.425 { 
00:16:44.425 "cntlid": 19, 00:16:44.425 "qid": 0, 00:16:44.425 "state": "enabled", 00:16:44.425 "thread": "nvmf_tgt_poll_group_000", 00:16:44.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:44.425 "listen_address": { 00:16:44.425 "trtype": "TCP", 00:16:44.425 "adrfam": "IPv4", 00:16:44.425 "traddr": "10.0.0.2", 00:16:44.425 "trsvcid": "4420" 00:16:44.425 }, 00:16:44.425 "peer_address": { 00:16:44.425 "trtype": "TCP", 00:16:44.425 "adrfam": "IPv4", 00:16:44.425 "traddr": "10.0.0.1", 00:16:44.425 "trsvcid": "34202" 00:16:44.425 }, 00:16:44.425 "auth": { 00:16:44.425 "state": "completed", 00:16:44.425 "digest": "sha256", 00:16:44.425 "dhgroup": "ffdhe3072" 00:16:44.425 } 00:16:44.425 } 00:16:44.425 ]' 00:16:44.425 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.425 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.425 04:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.425 04:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.425 04:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.685 04:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.685 04:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.685 04:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.685 04:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:16:44.685 04:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:16:45.628 04:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.628 04:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:45.628 04:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.628 04:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.628 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.890 00:16:45.890 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.890 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.890 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.168 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.168 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.168 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.168 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.168 04:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.168 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.168 { 00:16:46.168 "cntlid": 21, 00:16:46.168 "qid": 0, 00:16:46.168 "state": "enabled", 00:16:46.168 "thread": "nvmf_tgt_poll_group_000", 00:16:46.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:46.168 "listen_address": { 00:16:46.168 "trtype": "TCP", 00:16:46.168 "adrfam": "IPv4", 00:16:46.168 "traddr": "10.0.0.2", 00:16:46.168 "trsvcid": "4420" 00:16:46.168 }, 00:16:46.169 "peer_address": { 00:16:46.169 "trtype": "TCP", 00:16:46.169 "adrfam": "IPv4", 00:16:46.169 "traddr": "10.0.0.1", 00:16:46.169 "trsvcid": "38772" 00:16:46.169 }, 00:16:46.169 "auth": { 00:16:46.169 "state": "completed", 00:16:46.169 "digest": "sha256", 00:16:46.169 "dhgroup": "ffdhe3072" 00:16:46.169 } 00:16:46.169 } 00:16:46.169 ]' 00:16:46.169 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.169 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.169 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.169 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:46.169 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.169 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.169 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.169 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.442 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:16:46.442 04:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:16:47.127 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.127 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.127 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.127 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.127 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:47.127 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.127 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.127 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:47.468 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:47.468 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.468 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:47.468 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:47.468 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:47.468 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.468 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:47.468 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.468 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.468 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.468 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:47.468 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.468 04:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.734 00:16:47.734 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.734 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.734 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.734 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.734 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.734 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.734 04:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.734 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.734 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.734 { 00:16:47.734 "cntlid": 23, 00:16:47.734 "qid": 0, 00:16:47.734 "state": "enabled", 00:16:47.734 "thread": "nvmf_tgt_poll_group_000", 00:16:47.734 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:47.734 "listen_address": { 00:16:47.734 "trtype": "TCP", 00:16:47.734 "adrfam": "IPv4", 00:16:47.734 "traddr": "10.0.0.2", 00:16:47.734 "trsvcid": "4420" 00:16:47.734 }, 00:16:47.734 "peer_address": { 00:16:47.734 "trtype": "TCP", 00:16:47.734 "adrfam": "IPv4", 00:16:47.734 "traddr": "10.0.0.1", 00:16:47.734 "trsvcid": "38804" 00:16:47.734 }, 00:16:47.734 "auth": { 00:16:47.734 "state": "completed", 00:16:47.734 "digest": "sha256", 00:16:47.734 "dhgroup": "ffdhe3072" 00:16:47.734 } 00:16:47.734 } 00:16:47.734 ]' 00:16:47.734 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.734 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.734 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.996 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:47.996 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.996 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.996 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.996 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.996 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:16:47.996 04:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.940 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.940 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.201 00:16:49.201 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.201 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.201 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.462 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.462 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.462 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.462 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.462 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.462 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.462 { 00:16:49.462 "cntlid": 25, 00:16:49.462 "qid": 0, 00:16:49.462 "state": "enabled", 00:16:49.462 "thread": "nvmf_tgt_poll_group_000", 00:16:49.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:49.462 "listen_address": { 00:16:49.462 "trtype": "TCP", 00:16:49.462 "adrfam": "IPv4", 00:16:49.462 "traddr": "10.0.0.2", 00:16:49.462 "trsvcid": "4420" 00:16:49.462 }, 00:16:49.462 "peer_address": { 00:16:49.462 "trtype": "TCP", 00:16:49.462 "adrfam": "IPv4", 00:16:49.462 "traddr": "10.0.0.1", 00:16:49.462 "trsvcid": "38822" 00:16:49.462 }, 00:16:49.462 "auth": { 00:16:49.462 "state": "completed", 00:16:49.462 "digest": "sha256", 00:16:49.462 "dhgroup": "ffdhe4096" 00:16:49.462 } 00:16:49.462 } 00:16:49.462 ]' 00:16:49.462 04:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.463 04:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.463 04:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.463 04:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.463 04:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.723 04:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.723 04:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.723 04:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.723 04:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:16:49.723 04:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.665 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.927 00:16:50.927 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.927 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.927 04:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.188 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.188 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.188 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.188 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.188 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.188 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.188 { 00:16:51.188 "cntlid": 27, 00:16:51.188 "qid": 0, 00:16:51.188 "state": "enabled", 00:16:51.188 "thread": "nvmf_tgt_poll_group_000", 00:16:51.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:51.188 "listen_address": { 00:16:51.188 "trtype": "TCP", 00:16:51.188 "adrfam": "IPv4", 00:16:51.188 "traddr": "10.0.0.2", 00:16:51.188 "trsvcid": "4420" 00:16:51.188 }, 00:16:51.188 "peer_address": { 00:16:51.188 "trtype": "TCP", 00:16:51.188 "adrfam": "IPv4", 00:16:51.188 "traddr": "10.0.0.1", 00:16:51.188 "trsvcid": "38848" 00:16:51.188 }, 00:16:51.188 "auth": { 00:16:51.188 "state": "completed", 00:16:51.188 "digest": "sha256", 00:16:51.188 "dhgroup": "ffdhe4096" 00:16:51.188 } 00:16:51.188 } 00:16:51.188 ]' 00:16:51.188 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.188 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.188 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.449 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:51.449 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.449 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.449 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.449 04:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.449 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:16:51.449 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.390 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.390 04:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.651 00:16:52.651 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.651 04:28:06 
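
The qpairs JSON dumped after each attach is asserted on, not just printed; the three checks are the jq filters visible in the trace. As a sketch ($rpc as above; sha256/ffdhe4096 are the values configured at this point in the log):

  # Exactly one qpair is expected, authenticated with the negotiated parameters.
  qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
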
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.651 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.912 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.912 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.912 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.912 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.912 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.912 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.912 { 00:16:52.912 "cntlid": 29, 00:16:52.912 "qid": 0, 00:16:52.912 "state": "enabled", 00:16:52.912 "thread": "nvmf_tgt_poll_group_000", 00:16:52.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:52.912 "listen_address": { 00:16:52.912 "trtype": "TCP", 00:16:52.912 "adrfam": "IPv4", 00:16:52.912 "traddr": "10.0.0.2", 00:16:52.912 "trsvcid": "4420" 00:16:52.912 }, 00:16:52.912 "peer_address": { 00:16:52.912 "trtype": "TCP", 00:16:52.912 "adrfam": "IPv4", 00:16:52.912 "traddr": "10.0.0.1", 00:16:52.912 "trsvcid": "38860" 00:16:52.912 }, 00:16:52.912 "auth": { 00:16:52.912 "state": "completed", 00:16:52.912 "digest": "sha256", 00:16:52.912 "dhgroup": "ffdhe4096" 00:16:52.912 } 00:16:52.912 } 00:16:52.912 ]' 00:16:52.912 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.912 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.912 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.173 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.173 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.173 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.173 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.173 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.173 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:16:53.173 04:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret 
DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.115 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.115 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.376 00:16:54.376 04:28:07 
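
Note the asymmetry in the key3 rounds (one has just been set up above): ckeys[3] is empty, so the ${ckeys[$3]:+...} expansion in the trace drops the --dhchap-ctrlr-key argument and nvmf_subsystem_add_host registers key3 alone; the matching nvme connect likewise carries no --dhchap-ctrl-secret, i.e. the host authenticates to the target without requesting bidirectional authentication. The kernel-initiator half of a bidirectional round, sketched from the flags in the trace (secrets deliberately elided; $hostnqn/$hostid as above):

  # Connect through the kernel initiator, authenticating with DH-HMAC-CHAP
  # in both directions, then drop the connection again.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "DHHC-1:..." --dhchap-ctrl-secret "DHHC-1:..."
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
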
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.376 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.376 04:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.637 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.637 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.637 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.637 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.637 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.637 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.637 { 00:16:54.637 "cntlid": 31, 00:16:54.637 "qid": 0, 00:16:54.637 "state": "enabled", 00:16:54.637 "thread": "nvmf_tgt_poll_group_000", 00:16:54.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:54.637 "listen_address": { 00:16:54.637 "trtype": "TCP", 00:16:54.637 "adrfam": "IPv4", 00:16:54.637 "traddr": "10.0.0.2", 00:16:54.637 "trsvcid": "4420" 00:16:54.637 }, 00:16:54.637 "peer_address": { 00:16:54.637 "trtype": "TCP", 00:16:54.637 "adrfam": "IPv4", 00:16:54.637 "traddr": "10.0.0.1", 00:16:54.637 "trsvcid": "38880" 00:16:54.637 }, 00:16:54.637 "auth": { 00:16:54.637 "state": "completed", 00:16:54.637 "digest": "sha256", 00:16:54.637 "dhgroup": "ffdhe4096" 00:16:54.637 } 00:16:54.637 } 00:16:54.637 ]' 00:16:54.637 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.637 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.637 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.637 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:54.637 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.910 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.910 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.910 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.910 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:16:54.910 04:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.857 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:56.117 00:16:56.378 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:56.378 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:56.378 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.378 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.378 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.378 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.378 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.378 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.378 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.378 { 00:16:56.378 "cntlid": 33, 00:16:56.378 "qid": 0, 00:16:56.378 "state": "enabled", 00:16:56.378 "thread": "nvmf_tgt_poll_group_000", 00:16:56.378 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:56.378 "listen_address": { 00:16:56.378 "trtype": "TCP", 00:16:56.378 "adrfam": "IPv4", 00:16:56.378 "traddr": "10.0.0.2", 00:16:56.378 "trsvcid": "4420" 00:16:56.378 }, 00:16:56.378 "peer_address": { 00:16:56.378 "trtype": "TCP", 00:16:56.378 "adrfam": "IPv4", 00:16:56.378 "traddr": "10.0.0.1", 00:16:56.378 "trsvcid": "43164" 00:16:56.378 }, 00:16:56.378 "auth": { 00:16:56.378 "state": "completed", 00:16:56.378 "digest": "sha256", 00:16:56.378 "dhgroup": "ffdhe6144" 00:16:56.378 } 00:16:56.378 } 00:16:56.378 ]' 00:16:56.378 04:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.378 04:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.378 04:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.640 04:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.640 04:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.640 04:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.640 04:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.640 04:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.640 04:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret 
DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:16:56.640 04:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.581 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.842 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.842 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.842 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.842 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.103 00:16:58.103 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:58.103 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:58.103 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.363 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.363 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:58.363 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.363 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.363 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.363 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:58.363 { 00:16:58.363 "cntlid": 35, 00:16:58.363 "qid": 0, 00:16:58.363 "state": "enabled", 00:16:58.363 "thread": "nvmf_tgt_poll_group_000", 00:16:58.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:16:58.363 "listen_address": { 00:16:58.363 "trtype": "TCP", 00:16:58.363 "adrfam": "IPv4", 00:16:58.363 "traddr": "10.0.0.2", 00:16:58.363 "trsvcid": "4420" 00:16:58.363 }, 00:16:58.363 "peer_address": { 00:16:58.363 "trtype": "TCP", 00:16:58.363 "adrfam": "IPv4", 00:16:58.363 "traddr": "10.0.0.1", 00:16:58.363 "trsvcid": "43196" 00:16:58.363 }, 00:16:58.363 "auth": { 00:16:58.363 "state": "completed", 00:16:58.363 "digest": "sha256", 00:16:58.363 "dhgroup": "ffdhe6144" 00:16:58.363 } 00:16:58.363 } 00:16:58.363 ]' 00:16:58.363 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:58.363 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:58.363 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.363 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:58.363 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.363 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.363 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.363 04:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.624 04:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:16:58.624 04:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:16:59.566 04:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:59.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:59.566 04:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:59.566 04:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.566 04:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.566 04:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.566 04:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:59.566 04:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:59.566 04:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:59.566 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:59.566 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:59.566 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:59.566 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:59.566 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:59.566 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:59.566 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.566 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.566 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.566 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.566 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.566 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.566 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.827 00:16:59.827 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.827 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.827 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.088 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.088 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:00.088 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.088 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.088 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.088 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:00.088 { 00:17:00.088 "cntlid": 37, 00:17:00.088 "qid": 0, 00:17:00.088 "state": "enabled", 00:17:00.088 "thread": "nvmf_tgt_poll_group_000", 00:17:00.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:00.088 "listen_address": { 00:17:00.088 "trtype": "TCP", 00:17:00.088 "adrfam": "IPv4", 00:17:00.088 "traddr": "10.0.0.2", 00:17:00.088 "trsvcid": "4420" 00:17:00.088 }, 00:17:00.088 "peer_address": { 00:17:00.088 "trtype": "TCP", 00:17:00.088 "adrfam": "IPv4", 00:17:00.088 "traddr": "10.0.0.1", 00:17:00.088 "trsvcid": "43220" 00:17:00.088 }, 00:17:00.088 "auth": { 00:17:00.088 "state": "completed", 00:17:00.088 "digest": "sha256", 00:17:00.088 "dhgroup": "ffdhe6144" 00:17:00.088 } 00:17:00.088 } 00:17:00.088 ]' 00:17:00.088 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:00.088 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:00.088 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:00.088 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:00.088 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:00.088 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:00.088 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:00.088 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.349 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:00.349 04:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:01.292 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.292 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:01.292 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.292 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.292 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.292 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:01.292 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.292 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.292 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:17:01.293 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:01.293 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:01.293 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:01.293 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:01.293 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.293 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:01.293 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.293 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.293 04:28:14 
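
Between rounds the script tears everything down so the next digest/dhgroup/key combination starts from a clean state, which is why the same detach/remove sequence recurs throughout the log. As a sketch built from the commands above:

  # Confirm the host-side controller came up under the expected name ...
  [[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  # ... detach it, and deregister the host (and its keys) from the subsystem.
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
  $rpc nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
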
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.293 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:01.293 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.293 04:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.865 00:17:01.865 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.865 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.865 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.865 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.865 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.865 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.866 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.866 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.866 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.866 { 00:17:01.866 "cntlid": 39, 00:17:01.866 "qid": 0, 00:17:01.866 "state": "enabled", 00:17:01.866 "thread": "nvmf_tgt_poll_group_000", 00:17:01.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:01.866 "listen_address": { 00:17:01.866 "trtype": "TCP", 00:17:01.866 "adrfam": "IPv4", 00:17:01.866 "traddr": "10.0.0.2", 00:17:01.866 "trsvcid": "4420" 00:17:01.866 }, 00:17:01.866 "peer_address": { 00:17:01.866 "trtype": "TCP", 00:17:01.866 "adrfam": "IPv4", 00:17:01.866 "traddr": "10.0.0.1", 00:17:01.866 "trsvcid": "43252" 00:17:01.866 }, 00:17:01.866 "auth": { 00:17:01.866 "state": "completed", 00:17:01.866 "digest": "sha256", 00:17:01.866 "dhgroup": "ffdhe6144" 00:17:01.866 } 00:17:01.866 } 00:17:01.866 ]' 00:17:01.866 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.866 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.866 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.127 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:02.127 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.127 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:02.127 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.127 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.127 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:02.127 04:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:03.070 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.070 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:03.070 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.070 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.070 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.070 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.070 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.070 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.070 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:03.332 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:17:03.332 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.332 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:03.332 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:03.332 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:03.332 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.332 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.332 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
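
The driver behind all of this is visible in the target/auth.sh@119-@123 line tags: an outer loop over DH groups and an inner loop over key indices, with the trace now entering ffdhe8192. A reconstruction from those tags (connect_authenticate and hostrpc are the script's own helpers; array contents beyond the three groups and four keys seen in this log are an assumption):

  for dhgroup in "${dhgroups[@]}"; do    # ffdhe4096, ffdhe6144, ffdhe8192 ...
      for keyid in "${!keys[@]}"; do     # key0 .. key3
          hostrpc bdev_nvme_set_options --dhchap-digests sha256 \
              --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha256 "$dhgroup" "$keyid"
      done
  done
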
00:17:03.332 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.332 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.332 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.332 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.332 04:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.904 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.904 { 00:17:03.904 "cntlid": 41, 00:17:03.904 "qid": 0, 00:17:03.904 "state": "enabled", 00:17:03.904 "thread": "nvmf_tgt_poll_group_000", 00:17:03.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:03.904 "listen_address": { 00:17:03.904 "trtype": "TCP", 00:17:03.904 "adrfam": "IPv4", 00:17:03.904 "traddr": "10.0.0.2", 00:17:03.904 "trsvcid": "4420" 00:17:03.904 }, 00:17:03.904 "peer_address": { 00:17:03.904 "trtype": "TCP", 00:17:03.904 "adrfam": "IPv4", 00:17:03.904 "traddr": "10.0.0.1", 00:17:03.904 "trsvcid": "43278" 00:17:03.904 }, 00:17:03.904 "auth": { 00:17:03.904 "state": "completed", 00:17:03.904 "digest": "sha256", 00:17:03.904 "dhgroup": "ffdhe8192" 00:17:03.904 } 00:17:03.904 } 00:17:03.904 ]' 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.904 04:28:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.904 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.165 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:04.165 04:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.108 04:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.679 00:17:05.679 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.679 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.679 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.940 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.940 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.940 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.940 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.940 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.940 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.940 { 00:17:05.940 "cntlid": 43, 00:17:05.940 "qid": 0, 00:17:05.940 "state": "enabled", 00:17:05.940 "thread": "nvmf_tgt_poll_group_000", 00:17:05.940 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:05.940 "listen_address": { 00:17:05.940 "trtype": "TCP", 00:17:05.940 "adrfam": "IPv4", 00:17:05.940 "traddr": "10.0.0.2", 00:17:05.940 "trsvcid": "4420" 00:17:05.940 }, 00:17:05.940 "peer_address": { 00:17:05.940 "trtype": "TCP", 00:17:05.940 "adrfam": "IPv4", 00:17:05.940 "traddr": "10.0.0.1", 00:17:05.940 "trsvcid": "49830" 00:17:05.940 }, 00:17:05.940 "auth": { 00:17:05.940 "state": "completed", 00:17:05.940 "digest": "sha256", 00:17:05.940 "dhgroup": "ffdhe8192" 00:17:05.940 } 00:17:05.940 } 00:17:05.940 ]' 00:17:05.940 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.940 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:17:05.940 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.940 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:05.940 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.940 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.940 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.940 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.201 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:06.201 04:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:07.143 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:07.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:07.143 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:07.143 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.143 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.143 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.144 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:07.144 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:07.144 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:07.144 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:17:07.144 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.144 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:07.144 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:07.144 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:07.144 04:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.144 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.144 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.144 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.144 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.144 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.144 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.144 04:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.716 00:17:07.716 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.716 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.716 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.976 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.976 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.976 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.976 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.976 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.976 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.976 { 00:17:07.976 "cntlid": 45, 00:17:07.976 "qid": 0, 00:17:07.976 "state": "enabled", 00:17:07.976 "thread": "nvmf_tgt_poll_group_000", 00:17:07.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:07.976 "listen_address": { 00:17:07.976 "trtype": "TCP", 00:17:07.976 "adrfam": "IPv4", 00:17:07.976 "traddr": "10.0.0.2", 00:17:07.976 "trsvcid": "4420" 00:17:07.976 }, 00:17:07.976 "peer_address": { 00:17:07.976 "trtype": "TCP", 00:17:07.976 "adrfam": "IPv4", 00:17:07.976 "traddr": "10.0.0.1", 00:17:07.976 "trsvcid": "49868" 00:17:07.976 }, 00:17:07.976 "auth": { 00:17:07.976 "state": "completed", 00:17:07.976 "digest": "sha256", 00:17:07.976 "dhgroup": "ffdhe8192" 00:17:07.976 } 00:17:07.976 } 00:17:07.976 ]' 00:17:07.976 
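[The qpair dump above is what the assertions that resume just below pick apart: jq extracts .auth.digest, .auth.dhgroup and .auth.state from the captured qpairs JSON, and bash compares each against the expected literal. The backslash-riddled right-hand sides (\s\h\a\2\5\6, \f\f\d\h\e\8\1\9\2, \c\o\m\p\l\e\t\e\d) are only how `set -x` renders a quoted pattern inside [[ ]]. A minimal sketch of the same check, assuming $qpairs holds the JSON printed above:

    # every auth field of the qpair must match the combination under test
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha256"    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]  # traced as [[ completed == \c\o\m\p\l\e\t\e\d ]]
]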
04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.976 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:07.976 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.976 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.976 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.976 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.976 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.976 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.237 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:08.237 04:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:09.179 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.179 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:09.179 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.179 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.179 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.179 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:09.179 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:09.179 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:09.180 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:17:09.180 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:09.180 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:09.180 04:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:09.180 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:09.180 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.180 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:09.180 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.180 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.180 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.180 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:09.180 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.180 04:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:09.751 00:17:09.751 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.751 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.751 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.012 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.012 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.012 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.012 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.012 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.012 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.012 { 00:17:10.012 "cntlid": 47, 00:17:10.012 "qid": 0, 00:17:10.012 "state": "enabled", 00:17:10.012 "thread": "nvmf_tgt_poll_group_000", 00:17:10.012 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:10.012 "listen_address": { 00:17:10.012 "trtype": "TCP", 00:17:10.012 "adrfam": "IPv4", 00:17:10.012 "traddr": "10.0.0.2", 00:17:10.012 "trsvcid": "4420" 00:17:10.012 }, 00:17:10.012 "peer_address": { 00:17:10.012 "trtype": "TCP", 00:17:10.012 "adrfam": "IPv4", 00:17:10.012 "traddr": "10.0.0.1", 00:17:10.012 "trsvcid": "49890" 00:17:10.012 }, 00:17:10.012 "auth": { 00:17:10.012 "state": "completed", 00:17:10.012 
"digest": "sha256", 00:17:10.012 "dhgroup": "ffdhe8192" 00:17:10.012 } 00:17:10.012 } 00:17:10.012 ]' 00:17:10.012 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.012 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:10.012 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.012 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:10.012 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.012 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.012 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.012 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.272 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:10.272 04:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:11.214 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.214 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:11.214 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:17:11.215 04:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.215 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:11.475 00:17:11.475 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.475 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.475 04:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.735 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.735 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.735 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.735 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.735 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.735 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:11.735 { 00:17:11.735 "cntlid": 49, 00:17:11.735 "qid": 0, 00:17:11.735 "state": "enabled", 00:17:11.735 "thread": "nvmf_tgt_poll_group_000", 00:17:11.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:11.735 "listen_address": { 00:17:11.735 "trtype": "TCP", 00:17:11.735 "adrfam": "IPv4", 
00:17:11.735 "traddr": "10.0.0.2", 00:17:11.735 "trsvcid": "4420" 00:17:11.735 }, 00:17:11.735 "peer_address": { 00:17:11.735 "trtype": "TCP", 00:17:11.735 "adrfam": "IPv4", 00:17:11.735 "traddr": "10.0.0.1", 00:17:11.735 "trsvcid": "49924" 00:17:11.735 }, 00:17:11.735 "auth": { 00:17:11.735 "state": "completed", 00:17:11.735 "digest": "sha384", 00:17:11.735 "dhgroup": "null" 00:17:11.735 } 00:17:11.735 } 00:17:11.736 ]' 00:17:11.736 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:11.736 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.736 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:11.736 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:11.736 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.736 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.736 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.736 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.996 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:11.996 04:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:12.567 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.567 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.567 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:12.567 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.567 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:12.828 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:13.088 00:17:13.089 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.089 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.089 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.349 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.349 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.349 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.349 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.349 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.349 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.349 { 00:17:13.349 "cntlid": 51, 00:17:13.349 "qid": 0, 00:17:13.349 "state": "enabled", 
00:17:13.349 "thread": "nvmf_tgt_poll_group_000", 00:17:13.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:13.349 "listen_address": { 00:17:13.349 "trtype": "TCP", 00:17:13.349 "adrfam": "IPv4", 00:17:13.349 "traddr": "10.0.0.2", 00:17:13.349 "trsvcid": "4420" 00:17:13.349 }, 00:17:13.349 "peer_address": { 00:17:13.349 "trtype": "TCP", 00:17:13.349 "adrfam": "IPv4", 00:17:13.349 "traddr": "10.0.0.1", 00:17:13.349 "trsvcid": "49954" 00:17:13.349 }, 00:17:13.349 "auth": { 00:17:13.349 "state": "completed", 00:17:13.349 "digest": "sha384", 00:17:13.349 "dhgroup": "null" 00:17:13.349 } 00:17:13.349 } 00:17:13.349 ]' 00:17:13.349 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.349 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.349 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.349 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:13.349 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.609 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.609 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.609 04:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.609 04:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:13.609 04:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:14.550 04:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.550 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.550 04:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:14.550 04:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.550 04:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.550 04:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.550 04:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.550 04:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:17:14.550 04:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:14.550 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:17:14.550 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.550 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:14.550 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:14.550 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:14.550 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.550 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.550 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.550 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.550 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.550 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.550 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.550 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:14.810 00:17:14.810 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.810 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.810 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.070 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.070 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.070 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.070 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.070 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.070 04:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.070 { 00:17:15.070 "cntlid": 53, 00:17:15.070 "qid": 0, 00:17:15.070 "state": "enabled", 00:17:15.070 "thread": "nvmf_tgt_poll_group_000", 00:17:15.070 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:15.070 "listen_address": { 00:17:15.070 "trtype": "TCP", 00:17:15.070 "adrfam": "IPv4", 00:17:15.070 "traddr": "10.0.0.2", 00:17:15.070 "trsvcid": "4420" 00:17:15.070 }, 00:17:15.070 "peer_address": { 00:17:15.070 "trtype": "TCP", 00:17:15.070 "adrfam": "IPv4", 00:17:15.070 "traddr": "10.0.0.1", 00:17:15.070 "trsvcid": "57182" 00:17:15.070 }, 00:17:15.070 "auth": { 00:17:15.070 "state": "completed", 00:17:15.070 "digest": "sha384", 00:17:15.070 "dhgroup": "null" 00:17:15.070 } 00:17:15.070 } 00:17:15.070 ]' 00:17:15.070 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.070 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.070 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.070 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:15.070 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.331 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.331 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.331 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.331 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:15.331 04:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.272 04:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:16.533 00:17:16.533 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.533 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.533 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.799 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.799 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.799 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.799 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.799 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.799 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.799 { 00:17:16.799 "cntlid": 55, 00:17:16.799 "qid": 0, 00:17:16.799 "state": "enabled", 00:17:16.799 "thread": "nvmf_tgt_poll_group_000", 00:17:16.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:16.799 "listen_address": { 00:17:16.799 "trtype": "TCP", 00:17:16.799 "adrfam": "IPv4", 00:17:16.799 "traddr": "10.0.0.2", 00:17:16.799 "trsvcid": "4420" 00:17:16.799 }, 00:17:16.799 "peer_address": { 00:17:16.799 "trtype": "TCP", 00:17:16.799 "adrfam": "IPv4", 00:17:16.799 "traddr": "10.0.0.1", 00:17:16.799 "trsvcid": "57218" 00:17:16.799 }, 00:17:16.799 "auth": { 00:17:16.799 "state": "completed", 00:17:16.799 "digest": "sha384", 00:17:16.799 "dhgroup": "null" 00:17:16.799 } 00:17:16.799 } 00:17:16.799 ]' 00:17:16.799 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.799 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.799 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.799 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:16.799 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.109 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.109 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.109 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.109 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:17.109 04:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.097 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:18.097 04:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.097 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:18.358 00:17:18.358 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.358 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.358 04:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.618 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.618 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.618 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:18.618 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.618 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.618 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.618 { 00:17:18.618 "cntlid": 57, 00:17:18.618 "qid": 0, 00:17:18.618 "state": "enabled", 00:17:18.618 "thread": "nvmf_tgt_poll_group_000", 00:17:18.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:18.618 "listen_address": { 00:17:18.618 "trtype": "TCP", 00:17:18.618 "adrfam": "IPv4", 00:17:18.618 "traddr": "10.0.0.2", 00:17:18.618 "trsvcid": "4420" 00:17:18.618 }, 00:17:18.618 "peer_address": { 00:17:18.618 "trtype": "TCP", 00:17:18.618 "adrfam": "IPv4", 00:17:18.618 "traddr": "10.0.0.1", 00:17:18.618 "trsvcid": "57252" 00:17:18.618 }, 00:17:18.618 "auth": { 00:17:18.618 "state": "completed", 00:17:18.618 "digest": "sha384", 00:17:18.618 "dhgroup": "ffdhe2048" 00:17:18.618 } 00:17:18.618 } 00:17:18.618 ]' 00:17:18.618 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.618 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.618 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.618 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:18.618 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.618 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.618 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.618 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.879 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:18.879 04:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:19.820 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:20.080 00:17:20.080 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:20.080 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:20.080 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.341 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.341 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.341 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.341 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.341 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.341 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.341 { 00:17:20.341 "cntlid": 59, 00:17:20.341 "qid": 0, 00:17:20.341 "state": "enabled", 00:17:20.341 "thread": "nvmf_tgt_poll_group_000", 00:17:20.341 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:20.341 "listen_address": { 00:17:20.341 "trtype": "TCP", 00:17:20.341 "adrfam": "IPv4", 00:17:20.341 "traddr": "10.0.0.2", 00:17:20.341 "trsvcid": "4420" 00:17:20.341 }, 00:17:20.341 "peer_address": { 00:17:20.341 "trtype": "TCP", 00:17:20.341 "adrfam": "IPv4", 00:17:20.341 "traddr": "10.0.0.1", 00:17:20.341 "trsvcid": "57292" 00:17:20.341 }, 00:17:20.341 "auth": { 00:17:20.341 "state": "completed", 00:17:20.341 "digest": "sha384", 00:17:20.341 "dhgroup": "ffdhe2048" 00:17:20.341 } 00:17:20.341 } 00:17:20.341 ]' 00:17:20.341 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.341 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.341 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.341 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:20.341 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.341 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.341 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.341 04:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.601 04:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:20.601 04:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:21.542 04:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.542 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.542 04:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:21.542 04:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.542 04:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.542 04:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.542 04:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:21.542 04:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.542 04:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:21.542 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:17:21.542 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.542 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:21.542 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:21.542 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:21.542 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.542 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.542 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.542 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.542 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.542 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.542 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.542 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:21.802 00:17:21.802 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.802 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:17:21.802 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:22.063 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:22.063 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:22.063 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.063 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.063 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.063 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:22.063 { 00:17:22.063 "cntlid": 61, 00:17:22.063 "qid": 0, 00:17:22.063 "state": "enabled", 00:17:22.063 "thread": "nvmf_tgt_poll_group_000", 00:17:22.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:22.063 "listen_address": { 00:17:22.063 "trtype": "TCP", 00:17:22.063 "adrfam": "IPv4", 00:17:22.063 "traddr": "10.0.0.2", 00:17:22.063 "trsvcid": "4420" 00:17:22.063 }, 00:17:22.063 "peer_address": { 00:17:22.063 "trtype": "TCP", 00:17:22.063 "adrfam": "IPv4", 00:17:22.063 "traddr": "10.0.0.1", 00:17:22.063 "trsvcid": "57304" 00:17:22.063 }, 00:17:22.063 "auth": { 00:17:22.063 "state": "completed", 00:17:22.063 "digest": "sha384", 00:17:22.063 "dhgroup": "ffdhe2048" 00:17:22.063 } 00:17:22.063 } 00:17:22.063 ]' 00:17:22.063 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:22.063 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.063 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:22.063 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:22.063 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:22.063 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.063 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.063 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.323 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:22.323 04:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:23.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.264 04:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:23.525 00:17:23.525 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.525 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.525 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.785 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.785 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.785 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.785 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.785 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.785 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.785 { 00:17:23.785 "cntlid": 63, 00:17:23.785 "qid": 0, 00:17:23.785 "state": "enabled", 00:17:23.785 "thread": "nvmf_tgt_poll_group_000", 00:17:23.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:23.785 "listen_address": { 00:17:23.785 "trtype": "TCP", 00:17:23.785 "adrfam": "IPv4", 00:17:23.785 "traddr": "10.0.0.2", 00:17:23.785 "trsvcid": "4420" 00:17:23.785 }, 00:17:23.785 "peer_address": { 00:17:23.785 "trtype": "TCP", 00:17:23.785 "adrfam": "IPv4", 00:17:23.785 "traddr": "10.0.0.1", 00:17:23.785 "trsvcid": "57332" 00:17:23.785 }, 00:17:23.785 "auth": { 00:17:23.785 "state": "completed", 00:17:23.785 "digest": "sha384", 00:17:23.785 "dhgroup": "ffdhe2048" 00:17:23.785 } 00:17:23.785 } 00:17:23.785 ]' 00:17:23.785 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.785 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.785 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.785 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:23.785 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.785 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.785 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.785 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.046 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:24.046 04:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:17:24.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:24.988 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.248 
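[annotation] At this point the script has finished the sha384/ffdhe2048 sweep over keys 0-3 and, via the outer for-dhgroup loop traced at target/auth.sh@119, has moved on to ffdhe3072. Every iteration follows the same three-step setup. The sketch below condenses it using the values from this run; the variables rpc, subnqn, and hostnqn are introduced here only for brevity, and the real sequencing lives in the target/auth.sh helper seen in the traces.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

    # 1. Pin the host-side initiator to a single digest/dhgroup combination
    #    (host RPC socket, /var/tmp/host.sock).
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

    # 2. Authorize the host NQN on the subsystem with the key under test
    #    (target-side RPC, default socket).
    $rpc nvmf_subsystem_add_host $subnqn $hostnqn \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3. Attach a controller through the host socket; this is the step that
    #    actually performs the DH-HMAC-CHAP handshake.
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q $hostnqn -n $subnqn -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0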
00:17:25.248 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:25.248 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:25.248 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.509 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.509 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.509 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.509 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.509 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.509 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.509 { 00:17:25.509 "cntlid": 65, 00:17:25.509 "qid": 0, 00:17:25.509 "state": "enabled", 00:17:25.509 "thread": "nvmf_tgt_poll_group_000", 00:17:25.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:25.509 "listen_address": { 00:17:25.509 "trtype": "TCP", 00:17:25.509 "adrfam": "IPv4", 00:17:25.509 "traddr": "10.0.0.2", 00:17:25.509 "trsvcid": "4420" 00:17:25.509 }, 00:17:25.509 "peer_address": { 00:17:25.509 "trtype": "TCP", 00:17:25.509 "adrfam": "IPv4", 00:17:25.509 "traddr": "10.0.0.1", 00:17:25.509 "trsvcid": "56590" 00:17:25.509 }, 00:17:25.509 "auth": { 00:17:25.509 "state": "completed", 00:17:25.509 "digest": "sha384", 00:17:25.509 "dhgroup": "ffdhe3072" 00:17:25.509 } 00:17:25.509 } 00:17:25.509 ]' 00:17:25.509 04:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.509 04:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.509 04:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.509 04:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:25.509 04:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.509 04:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.509 04:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.509 04:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.770 04:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:25.770 04:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:26.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.711 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:26.970 00:17:26.970 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.970 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.970 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.229 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.229 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.229 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.229 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.229 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.229 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.229 { 00:17:27.229 "cntlid": 67, 00:17:27.229 "qid": 0, 00:17:27.229 "state": "enabled", 00:17:27.229 "thread": "nvmf_tgt_poll_group_000", 00:17:27.229 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:27.229 "listen_address": { 00:17:27.229 "trtype": "TCP", 00:17:27.229 "adrfam": "IPv4", 00:17:27.229 "traddr": "10.0.0.2", 00:17:27.229 "trsvcid": "4420" 00:17:27.229 }, 00:17:27.229 "peer_address": { 00:17:27.229 "trtype": "TCP", 00:17:27.229 "adrfam": "IPv4", 00:17:27.229 "traddr": "10.0.0.1", 00:17:27.229 "trsvcid": "56620" 00:17:27.229 }, 00:17:27.229 "auth": { 00:17:27.229 "state": "completed", 00:17:27.229 "digest": "sha384", 00:17:27.229 "dhgroup": "ffdhe3072" 00:17:27.229 } 00:17:27.229 } 00:17:27.229 ]' 00:17:27.229 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:27.229 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:27.229 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:27.229 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:27.229 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:27.229 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:27.229 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:27.229 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.489 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret 
DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:27.489 04:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.428 04:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:28.688 00:17:28.688 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:28.688 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:28.688 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.948 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.948 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.948 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.948 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.948 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.948 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.948 { 00:17:28.948 "cntlid": 69, 00:17:28.948 "qid": 0, 00:17:28.948 "state": "enabled", 00:17:28.948 "thread": "nvmf_tgt_poll_group_000", 00:17:28.948 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:28.948 "listen_address": { 00:17:28.948 "trtype": "TCP", 00:17:28.948 "adrfam": "IPv4", 00:17:28.948 "traddr": "10.0.0.2", 00:17:28.948 "trsvcid": "4420" 00:17:28.948 }, 00:17:28.948 "peer_address": { 00:17:28.948 "trtype": "TCP", 00:17:28.948 "adrfam": "IPv4", 00:17:28.948 "traddr": "10.0.0.1", 00:17:28.948 "trsvcid": "56656" 00:17:28.948 }, 00:17:28.948 "auth": { 00:17:28.948 "state": "completed", 00:17:28.948 "digest": "sha384", 00:17:28.948 "dhgroup": "ffdhe3072" 00:17:28.948 } 00:17:28.948 } 00:17:28.948 ]' 00:17:28.948 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.948 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.948 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.948 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:28.948 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.948 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.948 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.948 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:29.208 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:29.208 04:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:29.780 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.040 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:30.040 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.040 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.040 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.040 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.040 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:30.040 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:30.300 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:30.300 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.300 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:30.300 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:30.300 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:30.300 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.300 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:30.300 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.300 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.300 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.300 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
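[annotation] Note that key3 is registered with --dhchap-key key3 but no --dhchap-ctrlr-key, unlike keys 0-2. The array expansion traced at target/auth.sh@68 emits the controller-key arguments only when a ckey is defined for that index, so the key3 iterations exercise one-way authentication (the host authenticates to the controller, but the controller is not challenged back). A minimal sketch of that idiom, where $3 is the key index passed to connect_authenticate:

    # ${ckeys[$3]:+word} expands to "word" only if ckeys[$3] is set and
    # non-empty; no controller key is defined for index 3, so the array
    # stays empty and the extra flags are simply omitted.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$3" "${ckey[@]}"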
00:17:30.300 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.300 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:30.300 00:17:30.561 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:30.561 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:30.561 04:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:30.561 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:30.561 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:30.561 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.561 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.561 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.561 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:30.561 { 00:17:30.561 "cntlid": 71, 00:17:30.561 "qid": 0, 00:17:30.561 "state": "enabled", 00:17:30.561 "thread": "nvmf_tgt_poll_group_000", 00:17:30.561 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:30.561 "listen_address": { 00:17:30.561 "trtype": "TCP", 00:17:30.561 "adrfam": "IPv4", 00:17:30.561 "traddr": "10.0.0.2", 00:17:30.561 "trsvcid": "4420" 00:17:30.561 }, 00:17:30.561 "peer_address": { 00:17:30.561 "trtype": "TCP", 00:17:30.561 "adrfam": "IPv4", 00:17:30.561 "traddr": "10.0.0.1", 00:17:30.561 "trsvcid": "56686" 00:17:30.561 }, 00:17:30.561 "auth": { 00:17:30.561 "state": "completed", 00:17:30.561 "digest": "sha384", 00:17:30.561 "dhgroup": "ffdhe3072" 00:17:30.561 } 00:17:30.561 } 00:17:30.561 ]' 00:17:30.561 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:30.561 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:30.561 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:30.822 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:30.822 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.822 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.822 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.822 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.822 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:30.822 04:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.765 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.025 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
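[annotation] After each successful attach, the loop asserts the target's view of the connection: nvmf_subsystem_get_qpairs returns the JSON arrays reproduced throughout this log, and the jq probes traced at target/auth.sh@75-77 require the digest, dhgroup, and auth state to match what was configured. Roughly, assuming rpc points at spdk/scripts/rpc.py as in the sketch above, and with the expected values of the iteration starting here (sha384/ffdhe4096):

    # Sketch of the verification step; "completed" means the DH-HMAC-CHAP
    # exchange finished successfully on this qpair.
    qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]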
00:17:32.025 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.025 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.025 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:32.025 00:17:32.286 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.286 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.286 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.286 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.286 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.286 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.286 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.286 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.286 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:32.286 { 00:17:32.286 "cntlid": 73, 00:17:32.286 "qid": 0, 00:17:32.286 "state": "enabled", 00:17:32.286 "thread": "nvmf_tgt_poll_group_000", 00:17:32.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:32.286 "listen_address": { 00:17:32.286 "trtype": "TCP", 00:17:32.286 "adrfam": "IPv4", 00:17:32.286 "traddr": "10.0.0.2", 00:17:32.286 "trsvcid": "4420" 00:17:32.286 }, 00:17:32.286 "peer_address": { 00:17:32.286 "trtype": "TCP", 00:17:32.286 "adrfam": "IPv4", 00:17:32.286 "traddr": "10.0.0.1", 00:17:32.286 "trsvcid": "56722" 00:17:32.286 }, 00:17:32.286 "auth": { 00:17:32.286 "state": "completed", 00:17:32.286 "digest": "sha384", 00:17:32.286 "dhgroup": "ffdhe4096" 00:17:32.286 } 00:17:32.286 } 00:17:32.286 ]' 00:17:32.286 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:32.286 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.286 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:32.547 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:32.547 04:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:32.547 04:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.547 
04:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.547 04:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.547 04:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:32.547 04:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:33.490 04:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.490 04:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:33.490 04:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.490 04:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.490 04:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.490 04:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.490 04:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:33.490 04:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:33.750 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:33.750 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:33.750 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:33.750 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:33.750 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:33.750 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.750 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.750 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.750 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.750 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.750 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.750 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:33.750 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:34.011 00:17:34.011 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.011 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.011 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.011 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.011 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.011 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.011 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.011 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.011 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.011 { 00:17:34.011 "cntlid": 75, 00:17:34.011 "qid": 0, 00:17:34.011 "state": "enabled", 00:17:34.011 "thread": "nvmf_tgt_poll_group_000", 00:17:34.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:34.011 "listen_address": { 00:17:34.011 "trtype": "TCP", 00:17:34.011 "adrfam": "IPv4", 00:17:34.011 "traddr": "10.0.0.2", 00:17:34.011 "trsvcid": "4420" 00:17:34.011 }, 00:17:34.011 "peer_address": { 00:17:34.011 "trtype": "TCP", 00:17:34.011 "adrfam": "IPv4", 00:17:34.011 "traddr": "10.0.0.1", 00:17:34.011 "trsvcid": "56760" 00:17:34.011 }, 00:17:34.011 "auth": { 00:17:34.011 "state": "completed", 00:17:34.011 "digest": "sha384", 00:17:34.011 "dhgroup": "ffdhe4096" 00:17:34.011 } 00:17:34.011 } 00:17:34.011 ]' 00:17:34.011 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.271 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.271 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.271 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:17:34.271 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.271 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.271 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.271 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.531 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:34.531 04:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:35.103 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.103 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.103 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:35.103 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.103 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.103 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.103 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.103 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.103 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:35.365 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:35.365 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.365 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:35.365 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:35.365 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:35.365 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.365 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.365 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.365 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.365 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.365 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.365 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.365 04:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:35.625 00:17:35.625 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:35.625 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:35.625 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.886 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.886 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.886 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.886 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.886 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.886 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:35.886 { 00:17:35.886 "cntlid": 77, 00:17:35.886 "qid": 0, 00:17:35.886 "state": "enabled", 00:17:35.886 "thread": "nvmf_tgt_poll_group_000", 00:17:35.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:35.886 "listen_address": { 00:17:35.886 "trtype": "TCP", 00:17:35.886 "adrfam": "IPv4", 00:17:35.886 "traddr": "10.0.0.2", 00:17:35.886 "trsvcid": "4420" 00:17:35.886 }, 00:17:35.886 "peer_address": { 00:17:35.886 "trtype": "TCP", 00:17:35.886 "adrfam": "IPv4", 00:17:35.886 "traddr": "10.0.0.1", 00:17:35.886 "trsvcid": "53402" 00:17:35.886 }, 00:17:35.886 "auth": { 00:17:35.886 "state": "completed", 00:17:35.886 "digest": "sha384", 00:17:35.886 "dhgroup": "ffdhe4096" 00:17:35.886 } 00:17:35.886 } 00:17:35.886 ]' 00:17:35.886 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:35.886 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.886 04:28:49 
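The checks running around this point reduce to three jq assertions on the captured qpair JSON. A condensed sketch, assuming the same rpc.py path and subsystem NQN used throughout this trace (the expected values are this round's sha384/ffdhe4096):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# Fetch the subsystem's qpairs from the target, then check what the
# DH-HMAC-CHAP negotiation actually settled on.
qpairs=$($rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
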
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:35.886 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:35.886 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.147 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.147 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.147 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.147 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:36.147 04:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.090 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.090 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:37.350 00:17:37.350 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.350 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.350 04:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.611 04:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.611 04:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.611 04:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.611 04:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.611 04:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.611 04:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.611 { 00:17:37.611 "cntlid": 79, 00:17:37.611 "qid": 0, 00:17:37.611 "state": "enabled", 00:17:37.611 "thread": "nvmf_tgt_poll_group_000", 00:17:37.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:37.611 "listen_address": { 00:17:37.611 "trtype": "TCP", 00:17:37.611 "adrfam": "IPv4", 00:17:37.611 "traddr": "10.0.0.2", 00:17:37.611 "trsvcid": "4420" 00:17:37.611 }, 00:17:37.611 "peer_address": { 00:17:37.611 "trtype": "TCP", 00:17:37.611 "adrfam": "IPv4", 00:17:37.611 "traddr": "10.0.0.1", 00:17:37.611 "trsvcid": "53428" 00:17:37.611 }, 00:17:37.611 "auth": { 00:17:37.611 "state": "completed", 00:17:37.611 "digest": "sha384", 00:17:37.611 "dhgroup": "ffdhe4096" 00:17:37.611 } 00:17:37.611 } 00:17:37.611 ]' 00:17:37.611 04:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.611 04:28:51 
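Note why this key3 round passes no --dhchap-ctrlr-key: the ckey=(${ckeys[$3]:+...}) expansion traced above yields an empty array when no controller key is configured for that index. A minimal illustration of the idiom, with hypothetical array contents:

# ':+' substitutes the alternative only if the element is set and non-empty;
# index 3 is absent here, so ckey ends up as an empty array and the flag
# is simply omitted from the RPC invocation.
ckeys=([0]=c0 [1]=c1 [2]=c2)   # no controller key for index 3
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${ckey[@]:-<no controller key>}"   # prints: <no controller key>
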
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.611 04:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.611 04:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:37.872 04:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.872 04:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.872 04:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.872 04:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.872 04:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:37.872 04:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:38.815 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.815 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:38.815 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.815 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.815 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.815 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:38.815 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.815 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:38.815 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:38.815 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:38.815 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.815 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:38.815 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:38.815 04:28:52 
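A side note on the backslash runs such as \s\h\a\3\8\4 and \f\f\d\h\e\6\1\4\4 in these [[ ]] lines: they are an artifact of bash xtrace, which re-prints the right-hand side of == with every character escaped to show the comparison is against a literal string rather than a glob pattern. Reproducible in any bash shell:

set -x
digest=sha384
[[ $digest == "sha384" ]]   # xtrace prints: [[ sha384 == \s\h\a\3\8\4 ]]
set +x
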
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:38.816 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.816 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.816 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.816 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.816 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.816 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.816 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:38.816 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:39.387 00:17:39.387 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.387 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.387 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.387 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.387 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.387 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.387 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.387 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.387 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.387 { 00:17:39.387 "cntlid": 81, 00:17:39.387 "qid": 0, 00:17:39.387 "state": "enabled", 00:17:39.387 "thread": "nvmf_tgt_poll_group_000", 00:17:39.387 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:39.387 "listen_address": { 00:17:39.387 "trtype": "TCP", 00:17:39.387 "adrfam": "IPv4", 00:17:39.387 "traddr": "10.0.0.2", 00:17:39.387 "trsvcid": "4420" 00:17:39.387 }, 00:17:39.387 "peer_address": { 00:17:39.387 "trtype": "TCP", 00:17:39.387 "adrfam": "IPv4", 00:17:39.387 "traddr": "10.0.0.1", 00:17:39.387 "trsvcid": "53446" 00:17:39.387 }, 00:17:39.387 "auth": { 00:17:39.387 "state": "completed", 00:17:39.387 "digest": 
"sha384", 00:17:39.387 "dhgroup": "ffdhe6144" 00:17:39.387 } 00:17:39.387 } 00:17:39.387 ]' 00:17:39.387 04:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.649 04:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.649 04:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.649 04:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:39.649 04:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.649 04:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.649 04:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.649 04:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.910 04:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:39.910 04:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:40.481 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.481 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.481 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.481 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.481 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.481 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.481 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.481 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:40.741 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:40.741 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.741 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:40.741 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:40.741 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:40.741 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.741 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.741 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.741 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.741 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.741 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.741 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.741 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:41.002 00:17:41.002 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:41.002 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:41.002 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.263 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.263 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.263 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.263 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.263 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.263 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.263 { 00:17:41.263 "cntlid": 83, 00:17:41.263 "qid": 0, 00:17:41.263 "state": "enabled", 00:17:41.263 "thread": "nvmf_tgt_poll_group_000", 00:17:41.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:41.264 "listen_address": { 00:17:41.264 "trtype": "TCP", 00:17:41.264 "adrfam": "IPv4", 00:17:41.264 "traddr": "10.0.0.2", 00:17:41.264 
"trsvcid": "4420" 00:17:41.264 }, 00:17:41.264 "peer_address": { 00:17:41.264 "trtype": "TCP", 00:17:41.264 "adrfam": "IPv4", 00:17:41.264 "traddr": "10.0.0.1", 00:17:41.264 "trsvcid": "53472" 00:17:41.264 }, 00:17:41.264 "auth": { 00:17:41.264 "state": "completed", 00:17:41.264 "digest": "sha384", 00:17:41.264 "dhgroup": "ffdhe6144" 00:17:41.264 } 00:17:41.264 } 00:17:41.264 ]' 00:17:41.264 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.264 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.264 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.525 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:41.525 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.525 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.525 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.525 04:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.525 04:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:41.526 04:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:42.467 04:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.467 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.467 04:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:42.467 04:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.467 04:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.467 04:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.467 04:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:42.467 04:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.467 04:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:42.467 
04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:42.467 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:42.467 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:42.467 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:42.467 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:42.467 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.467 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.467 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.467 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.467 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.467 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.467 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.467 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.038 00:17:43.038 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:43.038 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:43.038 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.038 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.038 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.038 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.038 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.038 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.038 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:43.038 { 00:17:43.038 "cntlid": 85, 00:17:43.038 "qid": 0, 00:17:43.038 "state": "enabled", 00:17:43.038 "thread": "nvmf_tgt_poll_group_000", 00:17:43.038 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:43.038 "listen_address": { 00:17:43.038 "trtype": "TCP", 00:17:43.038 "adrfam": "IPv4", 00:17:43.038 "traddr": "10.0.0.2", 00:17:43.038 "trsvcid": "4420" 00:17:43.038 }, 00:17:43.038 "peer_address": { 00:17:43.038 "trtype": "TCP", 00:17:43.038 "adrfam": "IPv4", 00:17:43.038 "traddr": "10.0.0.1", 00:17:43.038 "trsvcid": "53490" 00:17:43.038 }, 00:17:43.038 "auth": { 00:17:43.038 "state": "completed", 00:17:43.038 "digest": "sha384", 00:17:43.038 "dhgroup": "ffdhe6144" 00:17:43.038 } 00:17:43.038 } 00:17:43.038 ]' 00:17:43.038 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:43.299 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.299 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:43.299 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:43.299 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:43.299 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.299 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.299 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.560 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:43.560 04:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:44.131 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.131 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.131 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:44.131 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.131 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.392 04:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.392 04:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:44.653 00:17:44.913 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.913 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.913 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.913 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.913 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.913 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.913 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.913 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:44.913 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.913 { 00:17:44.913 "cntlid": 87, 
00:17:44.913 "qid": 0, 00:17:44.913 "state": "enabled", 00:17:44.913 "thread": "nvmf_tgt_poll_group_000", 00:17:44.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:44.913 "listen_address": { 00:17:44.913 "trtype": "TCP", 00:17:44.913 "adrfam": "IPv4", 00:17:44.913 "traddr": "10.0.0.2", 00:17:44.913 "trsvcid": "4420" 00:17:44.913 }, 00:17:44.913 "peer_address": { 00:17:44.913 "trtype": "TCP", 00:17:44.913 "adrfam": "IPv4", 00:17:44.913 "traddr": "10.0.0.1", 00:17:44.913 "trsvcid": "55014" 00:17:44.913 }, 00:17:44.913 "auth": { 00:17:44.913 "state": "completed", 00:17:44.913 "digest": "sha384", 00:17:44.913 "dhgroup": "ffdhe6144" 00:17:44.913 } 00:17:44.913 } 00:17:44.913 ]' 00:17:44.913 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.913 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:44.913 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.174 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:45.174 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.174 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.174 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.174 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.434 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:45.434 04:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:46.007 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.007 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:46.007 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.007 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.007 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.007 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.007 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.007 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:46.007 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:46.268 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:46.268 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.268 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:46.268 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:46.268 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:46.268 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.268 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.268 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.268 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.268 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.268 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.268 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.268 04:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.839 00:17:46.839 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.839 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.839 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.839 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.839 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.839 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.839 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.101 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.101 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.101 { 00:17:47.101 "cntlid": 89, 00:17:47.101 "qid": 0, 00:17:47.101 "state": "enabled", 00:17:47.101 "thread": "nvmf_tgt_poll_group_000", 00:17:47.101 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:47.101 "listen_address": { 00:17:47.101 "trtype": "TCP", 00:17:47.101 "adrfam": "IPv4", 00:17:47.101 "traddr": "10.0.0.2", 00:17:47.101 "trsvcid": "4420" 00:17:47.101 }, 00:17:47.101 "peer_address": { 00:17:47.101 "trtype": "TCP", 00:17:47.101 "adrfam": "IPv4", 00:17:47.101 "traddr": "10.0.0.1", 00:17:47.101 "trsvcid": "55032" 00:17:47.101 }, 00:17:47.101 "auth": { 00:17:47.101 "state": "completed", 00:17:47.101 "digest": "sha384", 00:17:47.101 "dhgroup": "ffdhe8192" 00:17:47.101 } 00:17:47.101 } 00:17:47.101 ]' 00:17:47.101 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:47.101 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:47.101 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.101 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:47.101 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.101 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.101 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.101 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.362 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:47.362 04:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:47.932 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.193 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.193 04:29:01 
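Zooming out, the stretch of log from the first ffdhe4096 round to here follows one nested sweep. A sketch of its shape, with the group list and key indices read off the rounds visible above (hostrpc and connect_authenticate are the target/auth.sh helpers this trace keeps invoking; the exact loop bounds are assumed from what is shown here):

for dhgroup in ffdhe4096 ffdhe6144 ffdhe8192; do
    for keyid in 0 1 2 3; do
        # Re-pin the host stack to the group under test, then run the
        # add_host / attach / verify / detach round for this key index.
        hostrpc bdev_nvme_set_options \
            --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
        connect_authenticate sha384 "$dhgroup" "$keyid"
    done
done
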
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.193 04:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.764 00:17:48.764 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:48.764 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:48.764 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.025 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.025 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:49.025 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.025 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.025 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.025 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.025 { 00:17:49.025 "cntlid": 91, 00:17:49.025 "qid": 0, 00:17:49.025 "state": "enabled", 00:17:49.025 "thread": "nvmf_tgt_poll_group_000", 00:17:49.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:49.025 "listen_address": { 00:17:49.025 "trtype": "TCP", 00:17:49.025 "adrfam": "IPv4", 00:17:49.025 "traddr": "10.0.0.2", 00:17:49.025 "trsvcid": "4420" 00:17:49.025 }, 00:17:49.025 "peer_address": { 00:17:49.025 "trtype": "TCP", 00:17:49.025 "adrfam": "IPv4", 00:17:49.025 "traddr": "10.0.0.1", 00:17:49.025 "trsvcid": "55052" 00:17:49.025 }, 00:17:49.025 "auth": { 00:17:49.025 "state": "completed", 00:17:49.025 "digest": "sha384", 00:17:49.025 "dhgroup": "ffdhe8192" 00:17:49.025 } 00:17:49.025 } 00:17:49.025 ]' 00:17:49.025 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.025 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:49.025 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.025 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:49.025 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.025 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:49.025 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.025 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.286 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:49.286 04:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:50.229 04:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.229 04:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:50.801 00:17:50.801 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.801 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.801 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.062 04:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.062 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.062 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.062 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.062 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.062 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:51.062 { 00:17:51.062 "cntlid": 93, 00:17:51.062 "qid": 0, 00:17:51.062 "state": "enabled", 00:17:51.062 "thread": "nvmf_tgt_poll_group_000", 00:17:51.062 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:51.062 "listen_address": { 00:17:51.062 "trtype": "TCP", 00:17:51.062 "adrfam": "IPv4", 00:17:51.062 "traddr": "10.0.0.2", 00:17:51.062 "trsvcid": "4420" 00:17:51.062 }, 00:17:51.062 "peer_address": { 00:17:51.062 "trtype": "TCP", 00:17:51.062 "adrfam": "IPv4", 00:17:51.062 "traddr": "10.0.0.1", 00:17:51.062 "trsvcid": "55084" 00:17:51.062 }, 00:17:51.062 "auth": { 00:17:51.062 "state": "completed", 00:17:51.062 "digest": "sha384", 00:17:51.062 "dhgroup": "ffdhe8192" 00:17:51.062 } 00:17:51.062 } 00:17:51.062 ]' 00:17:51.062 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:51.062 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.062 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:51.062 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:51.062 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:51.062 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.062 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.062 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.323 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:51.323 04:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:51.894 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.894 04:29:05 
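For reference, the qpair check that target/auth.sh repeats after every attach (the @73-@78 lines above) boils down to the sketch below. The NQN, controller name, and socket paths are taken from this run; rpc.py is spdk/scripts/rpc.py, the target app answers on its default RPC socket and the host app on /var/tmp/host.sock. The expected values here are this iteration's (sha384 / ffdhe8192):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # target side: dump the subsystem's active qpairs, assert the negotiated auth params
  qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<<"$qpairs") == sha384    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<<"$qpairs") == ffdhe8192 ]]
  [[ $(jq -r '.[0].auth.state'   <<<"$qpairs") == completed ]]
  # host side: the controller must have come up as nvme0, then tear it down
  [[ $("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0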
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.894 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.894 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.894 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.894 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.894 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:51.894 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:52.156 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:52.156 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.156 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:52.156 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:52.156 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:52.156 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.156 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:52.156 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.156 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.156 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.156 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:52.156 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.156 04:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:52.727 00:17:52.727 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.727 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.727 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.988 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.988 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.988 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.988 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.988 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.988 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.988 { 00:17:52.988 "cntlid": 95, 00:17:52.988 "qid": 0, 00:17:52.988 "state": "enabled", 00:17:52.988 "thread": "nvmf_tgt_poll_group_000", 00:17:52.988 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:52.988 "listen_address": { 00:17:52.988 "trtype": "TCP", 00:17:52.988 "adrfam": "IPv4", 00:17:52.988 "traddr": "10.0.0.2", 00:17:52.988 "trsvcid": "4420" 00:17:52.989 }, 00:17:52.989 "peer_address": { 00:17:52.989 "trtype": "TCP", 00:17:52.989 "adrfam": "IPv4", 00:17:52.989 "traddr": "10.0.0.1", 00:17:52.989 "trsvcid": "55112" 00:17:52.989 }, 00:17:52.989 "auth": { 00:17:52.989 "state": "completed", 00:17:52.989 "digest": "sha384", 00:17:52.989 "dhgroup": "ffdhe8192" 00:17:52.989 } 00:17:52.989 } 00:17:52.989 ]' 00:17:52.989 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.989 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:52.989 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.989 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:52.989 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.989 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.989 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.989 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.249 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:53.249 04:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:17:54.191 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.191 04:29:07 
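The connect/disconnect exercised just above is the in-kernel path, plain nvme-cli rather than the SPDK host app. A minimal sketch with the host UUID from this run; the DHHC-1 secrets are the literal blobs printed in the trace and are elided to placeholders here. Note that --dhchap-ctrl-secret only appears for the bidirectional key slots; the key3 iteration above passes --dhchap-secret alone:

  hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "nqn.2014-08.org.nvmexpress:uuid:${hostid}" --hostid "${hostid}" -l 0 \
      --dhchap-secret 'DHHC-1:03:<host secret>' \
      --dhchap-ctrl-secret 'DHHC-1:02:<ctrl secret>'   # placeholders; omit for unidirectional slots
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0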
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.191 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.191 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.191 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.191 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:54.191 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:54.191 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.191 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:54.191 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:54.191 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:54.191 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.191 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:54.191 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:54.191 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:54.191 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.192 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.192 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.192 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.192 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.192 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.192 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.192 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:54.452 00:17:54.452 
04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.452 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.452 04:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.712 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.712 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.712 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.712 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.712 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.712 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.712 { 00:17:54.712 "cntlid": 97, 00:17:54.712 "qid": 0, 00:17:54.712 "state": "enabled", 00:17:54.712 "thread": "nvmf_tgt_poll_group_000", 00:17:54.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:54.712 "listen_address": { 00:17:54.712 "trtype": "TCP", 00:17:54.712 "adrfam": "IPv4", 00:17:54.712 "traddr": "10.0.0.2", 00:17:54.712 "trsvcid": "4420" 00:17:54.712 }, 00:17:54.712 "peer_address": { 00:17:54.712 "trtype": "TCP", 00:17:54.712 "adrfam": "IPv4", 00:17:54.712 "traddr": "10.0.0.1", 00:17:54.712 "trsvcid": "55130" 00:17:54.712 }, 00:17:54.712 "auth": { 00:17:54.712 "state": "completed", 00:17:54.712 "digest": "sha512", 00:17:54.712 "dhgroup": "null" 00:17:54.712 } 00:17:54.712 } 00:17:54.712 ]' 00:17:54.712 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.712 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.712 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.713 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:54.713 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.713 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.713 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.713 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.974 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:54.974 04:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:17:55.546 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.806 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:55.806 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.806 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.806 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.806 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.806 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:55.806 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:55.806 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:55.806 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.806 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.806 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:55.806 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:55.806 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.807 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.807 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.807 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.807 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.807 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.807 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:55.807 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:56.067 00:17:56.067 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:56.067 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:56.067 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.328 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.328 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.328 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.328 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.328 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.328 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.328 { 00:17:56.328 "cntlid": 99, 00:17:56.328 "qid": 0, 00:17:56.328 "state": "enabled", 00:17:56.328 "thread": "nvmf_tgt_poll_group_000", 00:17:56.328 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:56.328 "listen_address": { 00:17:56.328 "trtype": "TCP", 00:17:56.328 "adrfam": "IPv4", 00:17:56.328 "traddr": "10.0.0.2", 00:17:56.328 "trsvcid": "4420" 00:17:56.328 }, 00:17:56.328 "peer_address": { 00:17:56.328 "trtype": "TCP", 00:17:56.328 "adrfam": "IPv4", 00:17:56.328 "traddr": "10.0.0.1", 00:17:56.328 "trsvcid": "46780" 00:17:56.328 }, 00:17:56.328 "auth": { 00:17:56.328 "state": "completed", 00:17:56.328 "digest": "sha512", 00:17:56.328 "dhgroup": "null" 00:17:56.328 } 00:17:56.328 } 00:17:56.328 ]' 00:17:56.328 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.328 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.328 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.328 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:56.328 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.328 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.328 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.328 04:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.589 04:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:56.589 04:29:10 
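A note on the DHHC-1:xx:...: blobs being passed around: per the NVMe DH-HMAC-CHAP secret representation (TP 8006, as I read it), the second field records the transform applied when the key was generated (00 = cleartext, 01/02/03 = SHA-256/384/512) and the base64 payload carries the key material plus a CRC. A quick way to eyeball that field in a shell, using one of the secrets printed in this run:

  secret='DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW:'
  IFS=: read -r _ xform _ <<<"$secret"   # split on ':' -> fields DHHC-1, 01, payload
  echo "$xform"   # 01 -> SHA-256 transformed secret; 00 would mean a cleartext key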
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:17:57.532 04:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.532 04:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:57.532 04:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.532 04:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.532 04:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.532 04:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:57.532 04:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.532 04:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:57.532 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:57.532 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.532 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.532 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:57.532 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:57.532 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.532 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.532 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.532 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.532 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.532 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.532 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:17:57.532 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.794 00:17:57.794 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.794 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.794 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.056 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.056 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.056 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.056 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.056 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.056 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.056 { 00:17:58.056 "cntlid": 101, 00:17:58.056 "qid": 0, 00:17:58.056 "state": "enabled", 00:17:58.056 "thread": "nvmf_tgt_poll_group_000", 00:17:58.056 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:58.056 "listen_address": { 00:17:58.056 "trtype": "TCP", 00:17:58.056 "adrfam": "IPv4", 00:17:58.056 "traddr": "10.0.0.2", 00:17:58.056 "trsvcid": "4420" 00:17:58.056 }, 00:17:58.056 "peer_address": { 00:17:58.056 "trtype": "TCP", 00:17:58.056 "adrfam": "IPv4", 00:17:58.056 "traddr": "10.0.0.1", 00:17:58.056 "trsvcid": "46812" 00:17:58.056 }, 00:17:58.056 "auth": { 00:17:58.056 "state": "completed", 00:17:58.056 "digest": "sha512", 00:17:58.056 "dhgroup": "null" 00:17:58.056 } 00:17:58.056 } 00:17:58.056 ]' 00:17:58.056 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.056 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.056 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.056 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:58.056 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.056 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.056 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.056 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.316 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:58.317 04:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:59.259 04:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:59.520 00:17:59.520 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.520 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.520 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.782 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.782 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.782 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.782 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.782 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.782 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.782 { 00:17:59.782 "cntlid": 103, 00:17:59.782 "qid": 0, 00:17:59.782 "state": "enabled", 00:17:59.782 "thread": "nvmf_tgt_poll_group_000", 00:17:59.782 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:17:59.782 "listen_address": { 00:17:59.782 "trtype": "TCP", 00:17:59.782 "adrfam": "IPv4", 00:17:59.782 "traddr": "10.0.0.2", 00:17:59.782 "trsvcid": "4420" 00:17:59.782 }, 00:17:59.782 "peer_address": { 00:17:59.782 "trtype": "TCP", 00:17:59.782 "adrfam": "IPv4", 00:17:59.782 "traddr": "10.0.0.1", 00:17:59.782 "trsvcid": "46834" 00:17:59.782 }, 00:17:59.782 "auth": { 00:17:59.782 "state": "completed", 00:17:59.782 "digest": "sha512", 00:17:59.782 "dhgroup": "null" 00:17:59.782 } 00:17:59.782 } 00:17:59.782 ]' 00:17:59.782 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.782 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.782 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.782 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:59.782 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.782 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.782 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.782 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.043 04:29:13 
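Stepping back, the pattern that repeats from here on is the harness's triple loop, visible as the @118/@119/@120 lines in the trace: for every digest, for every dhgroup, for every key slot, restrict the host to exactly one digest/dhgroup pair and run one connect_authenticate pass. Schematically (rpc as in the earlier sketch; the lists below show only the values visible in this excerpt, the arrays in the script may hold more):

  for digest in sha384 sha512; do                    # digests[]; sha256 ran earlier in the log
    for dhgroup in null ffdhe2048 ffdhe8192; do      # dhgroups[]; plus any other groups in the array
      for keyid in 0 1 2 3; do                       # "${!keys[@]}", the key slots
        "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
            --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done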
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:00.043 04:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.026 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.027 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.027 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.027 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
00:18:01.027 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.027 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.349 00:18:01.349 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.349 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.349 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.349 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.349 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.349 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.349 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.349 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.349 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.349 { 00:18:01.349 "cntlid": 105, 00:18:01.349 "qid": 0, 00:18:01.349 "state": "enabled", 00:18:01.349 "thread": "nvmf_tgt_poll_group_000", 00:18:01.349 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:01.349 "listen_address": { 00:18:01.349 "trtype": "TCP", 00:18:01.349 "adrfam": "IPv4", 00:18:01.349 "traddr": "10.0.0.2", 00:18:01.349 "trsvcid": "4420" 00:18:01.349 }, 00:18:01.349 "peer_address": { 00:18:01.349 "trtype": "TCP", 00:18:01.349 "adrfam": "IPv4", 00:18:01.349 "traddr": "10.0.0.1", 00:18:01.349 "trsvcid": "46854" 00:18:01.349 }, 00:18:01.349 "auth": { 00:18:01.349 "state": "completed", 00:18:01.349 "digest": "sha512", 00:18:01.349 "dhgroup": "ffdhe2048" 00:18:01.349 } 00:18:01.349 } 00:18:01.349 ]' 00:18:01.349 04:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.617 04:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.617 04:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.617 04:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:01.617 04:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.617 04:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.617 04:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.617 04:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.878 04:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:18:01.878 04:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:18:02.451 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.451 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:02.451 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.451 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.451 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.451 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:02.451 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:02.451 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:02.712 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:18:02.712 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.712 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.712 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:02.712 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:02.712 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.713 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.713 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.713 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:02.713 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.713 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.713 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.713 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.975 00:18:02.975 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:02.975 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:02.975 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.236 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.236 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.236 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.236 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.236 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.236 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.236 { 00:18:03.236 "cntlid": 107, 00:18:03.236 "qid": 0, 00:18:03.236 "state": "enabled", 00:18:03.236 "thread": "nvmf_tgt_poll_group_000", 00:18:03.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:03.236 "listen_address": { 00:18:03.236 "trtype": "TCP", 00:18:03.236 "adrfam": "IPv4", 00:18:03.236 "traddr": "10.0.0.2", 00:18:03.236 "trsvcid": "4420" 00:18:03.236 }, 00:18:03.236 "peer_address": { 00:18:03.236 "trtype": "TCP", 00:18:03.236 "adrfam": "IPv4", 00:18:03.236 "traddr": "10.0.0.1", 00:18:03.236 "trsvcid": "46880" 00:18:03.236 }, 00:18:03.236 "auth": { 00:18:03.236 "state": "completed", 00:18:03.236 "digest": "sha512", 00:18:03.236 "dhgroup": "ffdhe2048" 00:18:03.236 } 00:18:03.236 } 00:18:03.236 ]' 00:18:03.236 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.236 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.236 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.236 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:03.236 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:18:03.236 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.236 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.236 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.497 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:18:03.497 04:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
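One shell detail worth calling out from the @68 line just above: ckey is built with ${ckeys[$3]:+...}, so when a slot has no controller key the array expands to nothing and nvmf_subsystem_add_host / bdev_nvme_attach_controller run without --dhchap-ctrlr-key, which is exactly why the key3 iterations in this log test unidirectional authentication while key0-key2 test bidirectional. A minimal sketch of the same idiom (the ckeys contents are hypothetical markers; slot 3 is empty as in the run):

  declare -a ckeys=([0]=set [1]=set [2]=set [3]=)
  keyid=3
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${#ckey[@]}"   # 0 -> the RPCs get no --dhchap-ctrlr-key (unidirectional)
  keyid=2
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo "${ckey[@]}"    # --dhchap-ctrlr-key ckey2 (bidirectional); the unquoted
                       # expansion word-splits into two array elements, as in the script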
00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.440 04:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.702 00:18:04.702 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.702 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.702 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.964 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.964 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.964 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.964 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.964 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.964 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.964 { 00:18:04.964 "cntlid": 109, 00:18:04.964 "qid": 0, 00:18:04.964 "state": "enabled", 00:18:04.964 "thread": "nvmf_tgt_poll_group_000", 00:18:04.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:04.964 "listen_address": { 00:18:04.964 "trtype": "TCP", 00:18:04.964 "adrfam": "IPv4", 00:18:04.964 "traddr": "10.0.0.2", 00:18:04.964 "trsvcid": "4420" 00:18:04.964 }, 00:18:04.964 "peer_address": { 00:18:04.964 "trtype": "TCP", 00:18:04.964 "adrfam": "IPv4", 00:18:04.964 "traddr": "10.0.0.1", 00:18:04.964 "trsvcid": "60344" 00:18:04.964 }, 00:18:04.964 "auth": { 00:18:04.964 "state": "completed", 00:18:04.964 "digest": "sha512", 00:18:04.964 "dhgroup": "ffdhe2048" 00:18:04.964 } 00:18:04.964 } 00:18:04.964 ]' 00:18:04.964 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.964 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.964 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.964 04:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.964 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.964 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.964 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.964 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.225 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:18:05.225 04:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:18:05.797 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.059 04:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.059 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.320 00:18:06.320 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.320 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.320 04:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.581 04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.581 04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.581 04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.581 04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.581 04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.582 04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.582 { 00:18:06.582 "cntlid": 111, 00:18:06.582 "qid": 0, 00:18:06.582 "state": "enabled", 00:18:06.582 "thread": "nvmf_tgt_poll_group_000", 00:18:06.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:06.582 "listen_address": { 00:18:06.582 "trtype": "TCP", 00:18:06.582 "adrfam": "IPv4", 00:18:06.582 "traddr": "10.0.0.2", 00:18:06.582 "trsvcid": "4420" 00:18:06.582 }, 00:18:06.582 "peer_address": { 00:18:06.582 "trtype": "TCP", 00:18:06.582 "adrfam": "IPv4", 00:18:06.582 "traddr": "10.0.0.1", 00:18:06.582 "trsvcid": "60350" 00:18:06.582 }, 00:18:06.582 "auth": { 00:18:06.582 "state": "completed", 00:18:06.582 "digest": "sha512", 00:18:06.582 "dhgroup": "ffdhe2048" 00:18:06.582 } 00:18:06.582 } 00:18:06.582 ]' 00:18:06.582 04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.582 04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.582 
04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.582 04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:06.582 04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.582 04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.582 04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.582 04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.843 04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:06.843 04:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:07.786 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.787 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:08.047 00:18:08.047 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.047 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.047 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.309 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.309 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.309 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.309 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.309 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.309 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.309 { 00:18:08.309 "cntlid": 113, 00:18:08.309 "qid": 0, 00:18:08.309 "state": "enabled", 00:18:08.309 "thread": "nvmf_tgt_poll_group_000", 00:18:08.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:08.309 "listen_address": { 00:18:08.309 "trtype": "TCP", 00:18:08.309 "adrfam": "IPv4", 00:18:08.309 "traddr": "10.0.0.2", 00:18:08.309 "trsvcid": "4420" 00:18:08.309 }, 00:18:08.309 "peer_address": { 00:18:08.309 "trtype": "TCP", 00:18:08.309 "adrfam": "IPv4", 00:18:08.309 "traddr": "10.0.0.1", 00:18:08.309 "trsvcid": "60372" 00:18:08.309 }, 00:18:08.309 "auth": { 00:18:08.310 "state": "completed", 00:18:08.310 "digest": "sha512", 00:18:08.310 "dhgroup": "ffdhe3072" 00:18:08.310 } 00:18:08.310 } 00:18:08.310 ]' 00:18:08.310 04:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.310 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:08.310 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.310 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:08.310 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.310 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.310 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.310 04:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.571 04:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:18:08.571 04:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:18:09.515 04:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.515 04:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:09.515 04:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.515 04:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.515 04:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.515 04:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.515 04:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:09.515 04:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:09.515 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:18:09.515 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.515 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:18:09.515 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:09.515 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:09.516 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.516 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.516 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.516 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.516 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.516 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.516 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.516 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.776 00:18:09.776 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.776 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.776 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.037 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.037 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:10.037 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.037 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.037 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.037 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:10.037 { 00:18:10.037 "cntlid": 115, 00:18:10.037 "qid": 0, 00:18:10.037 "state": "enabled", 00:18:10.037 "thread": "nvmf_tgt_poll_group_000", 00:18:10.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:10.037 "listen_address": { 00:18:10.037 "trtype": "TCP", 00:18:10.037 "adrfam": "IPv4", 00:18:10.037 "traddr": "10.0.0.2", 00:18:10.037 "trsvcid": "4420" 00:18:10.037 }, 00:18:10.037 "peer_address": { 00:18:10.037 "trtype": "TCP", 00:18:10.037 "adrfam": "IPv4", 
00:18:10.037 "traddr": "10.0.0.1", 00:18:10.037 "trsvcid": "60412" 00:18:10.037 }, 00:18:10.037 "auth": { 00:18:10.037 "state": "completed", 00:18:10.037 "digest": "sha512", 00:18:10.037 "dhgroup": "ffdhe3072" 00:18:10.037 } 00:18:10.037 } 00:18:10.037 ]' 00:18:10.037 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:10.037 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:10.037 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.037 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:10.037 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.037 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.037 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.037 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.297 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:18:10.297 04:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.239 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.240 04:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.500 00:18:11.500 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.500 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.500 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.760 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.760 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.760 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.760 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.760 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.760 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.760 { 00:18:11.760 "cntlid": 117, 00:18:11.760 "qid": 0, 00:18:11.760 "state": "enabled", 00:18:11.760 "thread": "nvmf_tgt_poll_group_000", 00:18:11.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:11.760 "listen_address": { 00:18:11.760 "trtype": "TCP", 
00:18:11.760 "adrfam": "IPv4", 00:18:11.760 "traddr": "10.0.0.2", 00:18:11.760 "trsvcid": "4420" 00:18:11.760 }, 00:18:11.760 "peer_address": { 00:18:11.760 "trtype": "TCP", 00:18:11.760 "adrfam": "IPv4", 00:18:11.760 "traddr": "10.0.0.1", 00:18:11.760 "trsvcid": "60448" 00:18:11.760 }, 00:18:11.760 "auth": { 00:18:11.760 "state": "completed", 00:18:11.760 "digest": "sha512", 00:18:11.760 "dhgroup": "ffdhe3072" 00:18:11.760 } 00:18:11.760 } 00:18:11.760 ]' 00:18:11.760 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.760 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.760 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.760 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.760 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.760 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.760 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.760 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.020 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:18:12.020 04:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.961 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:13.222 00:18:13.222 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.222 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.222 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.482 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.482 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.482 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.482 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.482 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.482 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.482 { 00:18:13.482 "cntlid": 119, 00:18:13.482 "qid": 0, 00:18:13.482 "state": "enabled", 00:18:13.482 "thread": "nvmf_tgt_poll_group_000", 00:18:13.482 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:13.482 "listen_address": { 00:18:13.482 "trtype": "TCP", 00:18:13.482 "adrfam": "IPv4", 00:18:13.482 "traddr": "10.0.0.2", 00:18:13.482 "trsvcid": "4420" 00:18:13.482 }, 00:18:13.482 "peer_address": { 00:18:13.482 "trtype": "TCP", 00:18:13.482 "adrfam": "IPv4", 00:18:13.482 "traddr": "10.0.0.1", 00:18:13.482 "trsvcid": "60480" 00:18:13.482 }, 00:18:13.482 "auth": { 00:18:13.482 "state": "completed", 00:18:13.482 "digest": "sha512", 00:18:13.482 "dhgroup": "ffdhe3072" 00:18:13.482 } 00:18:13.482 } 00:18:13.482 ]' 00:18:13.482 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.482 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:13.482 04:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.482 04:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.482 04:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.482 04:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.482 04:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.482 04:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.742 04:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:13.742 04:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:14.682 04:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.682 04:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:14.682 04:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.682 04:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.682 04:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.682 04:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.682 04:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.682 04:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.682 04:29:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:14.682 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:18:14.682 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.682 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:14.682 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:14.682 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:14.682 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.682 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.682 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.682 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.682 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.682 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.682 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.682 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.941 00:18:14.941 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.941 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.941 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.201 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.201 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.201 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.201 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.201 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.201 04:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.201 { 00:18:15.201 "cntlid": 121, 00:18:15.201 "qid": 0, 00:18:15.201 "state": "enabled", 00:18:15.201 "thread": "nvmf_tgt_poll_group_000", 00:18:15.201 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:15.201 "listen_address": { 00:18:15.201 "trtype": "TCP", 00:18:15.201 "adrfam": "IPv4", 00:18:15.201 "traddr": "10.0.0.2", 00:18:15.201 "trsvcid": "4420" 00:18:15.201 }, 00:18:15.201 "peer_address": { 00:18:15.201 "trtype": "TCP", 00:18:15.201 "adrfam": "IPv4", 00:18:15.201 "traddr": "10.0.0.1", 00:18:15.201 "trsvcid": "51116" 00:18:15.201 }, 00:18:15.201 "auth": { 00:18:15.201 "state": "completed", 00:18:15.201 "digest": "sha512", 00:18:15.201 "dhgroup": "ffdhe4096" 00:18:15.201 } 00:18:15.201 } 00:18:15.201 ]' 00:18:15.201 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.201 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:15.201 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.201 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:15.201 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.201 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.201 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.201 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.462 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:18:15.462 04:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:18:16.033 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.033 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:16.033 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.033 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.033 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
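
The second half of each pass repeats the handshake through the kernel initiator: nvme-cli connects with the DH-HMAC-CHAP secrets passed inline, disconnects, and the host entry is removed from the subsystem again. Schematically, with the DHHC-1 strings replaced by placeholders (the real values appear verbatim in the log above) and HOSTNQN/HOSTID taken from this run:

    HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
    # connect via the kernel host stack, supplying host and controller secrets
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
        --dhchap-secret 'DHHC-1:00:<host secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:03:<controller secret>:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect "disconnected 1 controller(s)"
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"
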
00:18:16.033 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.033 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.033 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:16.294 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:18:16.294 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:16.294 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:16.294 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:16.294 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:16.294 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:16.294 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.294 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.294 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.294 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.294 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.294 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.294 04:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:16.555 00:18:16.555 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:16.555 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:16.555 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.815 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.815 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.815 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.815 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.815 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.815 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.815 { 00:18:16.815 "cntlid": 123, 00:18:16.815 "qid": 0, 00:18:16.815 "state": "enabled", 00:18:16.815 "thread": "nvmf_tgt_poll_group_000", 00:18:16.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:16.815 "listen_address": { 00:18:16.815 "trtype": "TCP", 00:18:16.815 "adrfam": "IPv4", 00:18:16.815 "traddr": "10.0.0.2", 00:18:16.815 "trsvcid": "4420" 00:18:16.815 }, 00:18:16.815 "peer_address": { 00:18:16.815 "trtype": "TCP", 00:18:16.815 "adrfam": "IPv4", 00:18:16.815 "traddr": "10.0.0.1", 00:18:16.815 "trsvcid": "51148" 00:18:16.815 }, 00:18:16.815 "auth": { 00:18:16.815 "state": "completed", 00:18:16.815 "digest": "sha512", 00:18:16.815 "dhgroup": "ffdhe4096" 00:18:16.815 } 00:18:16.815 } 00:18:16.815 ]' 00:18:16.815 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.815 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.815 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.815 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:16.815 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.076 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.076 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.076 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:17.076 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:18:17.076 04:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.016 04:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.016 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.276 00:18:18.276 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:18.276 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:18.276 04:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.537 04:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.537 04:29:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.537 04:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.537 04:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.537 04:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.537 04:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.537 { 00:18:18.537 "cntlid": 125, 00:18:18.537 "qid": 0, 00:18:18.537 "state": "enabled", 00:18:18.537 "thread": "nvmf_tgt_poll_group_000", 00:18:18.537 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:18.537 "listen_address": { 00:18:18.537 "trtype": "TCP", 00:18:18.537 "adrfam": "IPv4", 00:18:18.537 "traddr": "10.0.0.2", 00:18:18.537 "trsvcid": "4420" 00:18:18.537 }, 00:18:18.537 "peer_address": { 00:18:18.537 "trtype": "TCP", 00:18:18.537 "adrfam": "IPv4", 00:18:18.537 "traddr": "10.0.0.1", 00:18:18.537 "trsvcid": "51158" 00:18:18.537 }, 00:18:18.537 "auth": { 00:18:18.537 "state": "completed", 00:18:18.537 "digest": "sha512", 00:18:18.537 "dhgroup": "ffdhe4096" 00:18:18.537 } 00:18:18.537 } 00:18:18.537 ]' 00:18:18.537 04:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.537 04:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.537 04:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.537 04:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:18.537 04:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.797 04:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.797 04:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.797 04:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.797 04:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:18:18.797 04:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.738 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:19.738 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.739 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.999 00:18:19.999 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.999 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.999 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.260 04:29:33 
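Note the ckey assignment traced at target/auth.sh@68: ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}). That is bash's "use alternate value" expansion: the array receives the two-word option only when ckeys[$3] is set and non-empty, which is why the key3 pass above calls nvmf_subsystem_add_host with no --dhchap-ctrlr-key at all (no bidirectional controller key is configured for that slot). A standalone illustration of the idiom:

  # ${var:+word} expands to word only when var is set and non-empty, so an
  # optional CLI flag can be built without an if/else.
  ckeys=([0]=ck0 [1]=ck1 [2]=ck2 [3]=)    # slot 3 deliberately empty

  for keyid in "${!ckeys[@]}"; do
      ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
      echo "key$keyid -> ${ckey[*]:-<unidirectional>}"
  done
  # key3 prints <unidirectional>: the flag pair is simply omitted.
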
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.260 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.260 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.260 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.260 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.260 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.260 { 00:18:20.260 "cntlid": 127, 00:18:20.260 "qid": 0, 00:18:20.260 "state": "enabled", 00:18:20.260 "thread": "nvmf_tgt_poll_group_000", 00:18:20.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:20.260 "listen_address": { 00:18:20.260 "trtype": "TCP", 00:18:20.260 "adrfam": "IPv4", 00:18:20.260 "traddr": "10.0.0.2", 00:18:20.260 "trsvcid": "4420" 00:18:20.260 }, 00:18:20.260 "peer_address": { 00:18:20.260 "trtype": "TCP", 00:18:20.260 "adrfam": "IPv4", 00:18:20.260 "traddr": "10.0.0.1", 00:18:20.260 "trsvcid": "51184" 00:18:20.260 }, 00:18:20.260 "auth": { 00:18:20.260 "state": "completed", 00:18:20.260 "digest": "sha512", 00:18:20.260 "dhgroup": "ffdhe4096" 00:18:20.260 } 00:18:20.260 } 00:18:20.260 ]' 00:18:20.260 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.260 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:20.260 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.521 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:20.521 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.521 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.521 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.521 04:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.521 04:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:20.521 04:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:21.462 04:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.462 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.462 04:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:21.462 04:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.462 04:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.462 04:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.462 04:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:21.462 04:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.462 04:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:21.462 04:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:21.462 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:18:21.462 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.462 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:21.462 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:21.462 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:21.463 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.463 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.463 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.463 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.463 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.463 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.463 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:21.463 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:22.033 00:18:22.033 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.033 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.033 
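Each successful attach is then verified from the target's point of view: nvmf_subsystem_get_qpairs returns one JSON object per queue pair, and the dump that follows carries an auth stanza recording the negotiated digest, DH group, and final handshake state. The [[ ... ]] checks in the trace reduce to three jq probes of that stanza; for the ffdhe6144/key0 iteration running here they amount to:

  # Ask the target for the subsystem's qpairs and check the auth stanza
  # against what this loop iteration configured.
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe6144" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
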
04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.033 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.033 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.033 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.033 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.034 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.034 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.034 { 00:18:22.034 "cntlid": 129, 00:18:22.034 "qid": 0, 00:18:22.034 "state": "enabled", 00:18:22.034 "thread": "nvmf_tgt_poll_group_000", 00:18:22.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:22.034 "listen_address": { 00:18:22.034 "trtype": "TCP", 00:18:22.034 "adrfam": "IPv4", 00:18:22.034 "traddr": "10.0.0.2", 00:18:22.034 "trsvcid": "4420" 00:18:22.034 }, 00:18:22.034 "peer_address": { 00:18:22.034 "trtype": "TCP", 00:18:22.034 "adrfam": "IPv4", 00:18:22.034 "traddr": "10.0.0.1", 00:18:22.034 "trsvcid": "51216" 00:18:22.034 }, 00:18:22.034 "auth": { 00:18:22.034 "state": "completed", 00:18:22.034 "digest": "sha512", 00:18:22.034 "dhgroup": "ffdhe6144" 00:18:22.034 } 00:18:22.034 } 00:18:22.034 ]' 00:18:22.034 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.295 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.295 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.295 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:22.295 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.295 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.295 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.295 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.555 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:18:22.555 04:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret 
DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:18:23.125 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.125 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:23.125 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.125 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.125 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.125 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.125 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:23.125 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:23.386 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:18:23.386 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.386 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:23.386 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:23.386 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:23.386 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.386 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.386 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.386 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.386 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.386 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.386 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.386 04:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:23.646 00:18:23.907 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.907 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.907 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.907 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.907 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.907 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.907 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.907 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.907 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:23.907 { 00:18:23.907 "cntlid": 131, 00:18:23.907 "qid": 0, 00:18:23.907 "state": "enabled", 00:18:23.907 "thread": "nvmf_tgt_poll_group_000", 00:18:23.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:23.907 "listen_address": { 00:18:23.907 "trtype": "TCP", 00:18:23.907 "adrfam": "IPv4", 00:18:23.907 "traddr": "10.0.0.2", 00:18:23.907 "trsvcid": "4420" 00:18:23.907 }, 00:18:23.907 "peer_address": { 00:18:23.907 "trtype": "TCP", 00:18:23.907 "adrfam": "IPv4", 00:18:23.907 "traddr": "10.0.0.1", 00:18:23.907 "trsvcid": "51256" 00:18:23.907 }, 00:18:23.907 "auth": { 00:18:23.907 "state": "completed", 00:18:23.907 "digest": "sha512", 00:18:23.907 "dhgroup": "ffdhe6144" 00:18:23.907 } 00:18:23.907 } 00:18:23.907 ]' 00:18:23.907 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:23.907 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:23.907 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.167 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:24.167 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.167 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.167 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.167 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.168 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:18:24.168 04:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.109 04:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:25.681 00:18:25.681 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.681 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.681 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.681 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.681 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.681 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.681 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.681 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.681 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.681 { 00:18:25.681 "cntlid": 133, 00:18:25.681 "qid": 0, 00:18:25.681 "state": "enabled", 00:18:25.681 "thread": "nvmf_tgt_poll_group_000", 00:18:25.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:25.681 "listen_address": { 00:18:25.681 "trtype": "TCP", 00:18:25.681 "adrfam": "IPv4", 00:18:25.681 "traddr": "10.0.0.2", 00:18:25.681 "trsvcid": "4420" 00:18:25.681 }, 00:18:25.681 "peer_address": { 00:18:25.681 "trtype": "TCP", 00:18:25.681 "adrfam": "IPv4", 00:18:25.681 "traddr": "10.0.0.1", 00:18:25.681 "trsvcid": "54466" 00:18:25.681 }, 00:18:25.681 "auth": { 00:18:25.681 "state": "completed", 00:18:25.681 "digest": "sha512", 00:18:25.681 "dhgroup": "ffdhe6144" 00:18:25.681 } 00:18:25.681 } 00:18:25.681 ]' 00:18:25.681 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.941 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:25.941 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.941 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:25.941 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.941 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.941 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.941 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.211 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret 
DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:18:26.211 04:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:18:26.782 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.782 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:26.782 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.782 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.782 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.782 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.782 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:26.782 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:27.043 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:18:27.043 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:27.043 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:27.043 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:27.043 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:27.043 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.043 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:27.043 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.043 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.043 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.043 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:27.043 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:18:27.043 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.304 00:18:27.304 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.304 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.304 04:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.565 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.565 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.565 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.565 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.565 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.565 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.565 { 00:18:27.565 "cntlid": 135, 00:18:27.565 "qid": 0, 00:18:27.565 "state": "enabled", 00:18:27.565 "thread": "nvmf_tgt_poll_group_000", 00:18:27.565 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:27.565 "listen_address": { 00:18:27.565 "trtype": "TCP", 00:18:27.565 "adrfam": "IPv4", 00:18:27.565 "traddr": "10.0.0.2", 00:18:27.565 "trsvcid": "4420" 00:18:27.565 }, 00:18:27.565 "peer_address": { 00:18:27.565 "trtype": "TCP", 00:18:27.565 "adrfam": "IPv4", 00:18:27.565 "traddr": "10.0.0.1", 00:18:27.565 "trsvcid": "54498" 00:18:27.565 }, 00:18:27.565 "auth": { 00:18:27.565 "state": "completed", 00:18:27.565 "digest": "sha512", 00:18:27.565 "dhgroup": "ffdhe6144" 00:18:27.565 } 00:18:27.565 } 00:18:27.565 ]' 00:18:27.565 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.565 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:27.565 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.826 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:27.826 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.826 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.826 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.826 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.826 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:27.826 04:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:28.768 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.768 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:28.768 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.768 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.768 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.768 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.768 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.768 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:28.768 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:28.768 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:18:28.768 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.768 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:28.769 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:28.769 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:28.769 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.769 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.769 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.769 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.769 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.769 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.769 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:28.769 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.341 00:18:29.341 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.341 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.341 04:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.601 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.601 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.601 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.601 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.601 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.601 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:29.601 { 00:18:29.601 "cntlid": 137, 00:18:29.601 "qid": 0, 00:18:29.601 "state": "enabled", 00:18:29.601 "thread": "nvmf_tgt_poll_group_000", 00:18:29.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:29.602 "listen_address": { 00:18:29.602 "trtype": "TCP", 00:18:29.602 "adrfam": "IPv4", 00:18:29.602 "traddr": "10.0.0.2", 00:18:29.602 "trsvcid": "4420" 00:18:29.602 }, 00:18:29.602 "peer_address": { 00:18:29.602 "trtype": "TCP", 00:18:29.602 "adrfam": "IPv4", 00:18:29.602 "traddr": "10.0.0.1", 00:18:29.602 "trsvcid": "54504" 00:18:29.602 }, 00:18:29.602 "auth": { 00:18:29.602 "state": "completed", 00:18:29.602 "digest": "sha512", 00:18:29.602 "dhgroup": "ffdhe8192" 00:18:29.602 } 00:18:29.602 } 00:18:29.602 ]' 00:18:29.602 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:29.602 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:29.602 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:29.602 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:29.602 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.602 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.602 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.602 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.862 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:18:29.862 04:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:18:30.433 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.433 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:30.433 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.433 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.433 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.433 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:30.433 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:30.433 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:30.693 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:18:30.693 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:30.693 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:30.693 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:30.693 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:30.693 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.693 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.693 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.694 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.694 04:29:44 
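The second half of every pass repeats the handshake with the Linux kernel initiator instead of the SPDK host: a plain nvme connect carrying --dhchap-secret and --dhchap-ctrl-secret. The DHHC-1:NN:...: strings are the standard NVMe-oF secret representation: NN records the transform applied to the key material (00 for none, 01/02/03 for SHA-256/384/512, which is why the host and controller secrets above mix DHHC-1:00: and DHHC-1:03: prefixes) and the base64 payload carries the key plus a checksum; keys in this form are typically produced with nvme gen-dhchap-key. A one-shot version of the kernel leg, mirroring the flags in the trace but with placeholder secrets:

  # Kernel-initiator leg: authenticated connect, then disconnect.
  # SECRET/CTRL_SECRET stand in for DHHC-1 strings like those in the trace.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" -l 0 \
      --dhchap-secret "$SECRET" --dhchap-ctrl-secret "$CTRL_SECRET"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
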
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.694 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.694 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:30.694 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.264 00:18:31.264 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.264 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.264 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.525 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.525 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.525 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.525 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.525 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.525 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.525 { 00:18:31.525 "cntlid": 139, 00:18:31.525 "qid": 0, 00:18:31.525 "state": "enabled", 00:18:31.525 "thread": "nvmf_tgt_poll_group_000", 00:18:31.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:31.525 "listen_address": { 00:18:31.525 "trtype": "TCP", 00:18:31.525 "adrfam": "IPv4", 00:18:31.525 "traddr": "10.0.0.2", 00:18:31.525 "trsvcid": "4420" 00:18:31.525 }, 00:18:31.525 "peer_address": { 00:18:31.525 "trtype": "TCP", 00:18:31.525 "adrfam": "IPv4", 00:18:31.525 "traddr": "10.0.0.1", 00:18:31.525 "trsvcid": "54524" 00:18:31.525 }, 00:18:31.525 "auth": { 00:18:31.525 "state": "completed", 00:18:31.525 "digest": "sha512", 00:18:31.525 "dhgroup": "ffdhe8192" 00:18:31.525 } 00:18:31.525 } 00:18:31.525 ]' 00:18:31.525 04:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.525 04:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:31.525 04:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.525 04:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:31.525 04:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:31.525 04:29:45 
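A note on the xtrace_disable / set +x / [[ 0 == 0 ]] triplets that bracket every rpc_cmd call in this log: the common helpers mute xtrace while they do their plumbing and re-enable it afterwards, so the first traced line is the helper's exit-status check with the status already expanded to 0. A minimal reconstruction of that mute-and-check shape (hypothetical helper name; the real logic lives in common/autotest_common.sh and may differ):

  run_quiet() {
      set +x                # stop tracing the helper's internals
      "$@"
      local status=$?
      set -x                # resume tracing
      [[ $status == 0 ]]    # appears in the log as: [[ 0 == 0 ]]
  }
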
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.525 04:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.525 04:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.785 04:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:18:31.785 04:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: --dhchap-ctrl-secret DHHC-1:02:Zjg0NTQzNzg1NGE0YjVjNmZkNWJhMDFkZmQxZjNmOGEzYjc4ZjdjNjVlZWU1NTcwH9+odQ==: 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.727 04:29:46 
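The backslash-riddled comparisons throughout the log, such as [[ nvme0 == \n\v\m\e\0 ]] and [[ sha512 == \s\h\a\5\1\2 ]], are not corruption: inside [[ ]] the right-hand side of == is a pattern, the script quotes it to force a literal match rather than a glob, and bash's xtrace re-prints that quoting as a backslash before every character. In script form:

  # What the script runs versus what xtrace prints for it.
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]    # traced as: [[ nvme0 == \n\v\m\e\0 ]]
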
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.727 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.298 00:18:33.298 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.298 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.298 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.298 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.298 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.298 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.298 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.558 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.558 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.558 { 00:18:33.558 "cntlid": 141, 00:18:33.558 "qid": 0, 00:18:33.558 "state": "enabled", 00:18:33.558 "thread": "nvmf_tgt_poll_group_000", 00:18:33.558 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:33.558 "listen_address": { 00:18:33.558 "trtype": "TCP", 00:18:33.558 "adrfam": "IPv4", 00:18:33.558 "traddr": "10.0.0.2", 00:18:33.558 "trsvcid": "4420" 00:18:33.558 }, 00:18:33.558 "peer_address": { 00:18:33.558 "trtype": "TCP", 00:18:33.558 "adrfam": "IPv4", 00:18:33.558 "traddr": "10.0.0.1", 00:18:33.558 "trsvcid": "54554" 00:18:33.558 }, 00:18:33.558 "auth": { 00:18:33.558 "state": "completed", 00:18:33.558 "digest": "sha512", 00:18:33.558 "dhgroup": "ffdhe8192" 00:18:33.558 } 00:18:33.558 } 00:18:33.558 ]' 00:18:33.558 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.558 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:33.558 04:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.558 04:29:47 
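Everything in this section is the same cycle replayed by two nested loops in target/auth.sh, with sha512 as the digest currently under test: the outer loop walks the DH groups (ffdhe4096, ffdhe6144, and by this point ffdhe8192) and the inner one walks the four key slots, tearing the session down completely between iterations so each pass authenticates from scratch. Schematically, with the function bodies elided:

  # Loop shape visible at target/auth.sh@119-123.
  for dhgroup in "${dhgroups[@]}"; do      # ... ffdhe4096 ffdhe6144 ffdhe8192
      for keyid in "${!keys[@]}"; do       # key0 .. key3
          hostrpc bdev_nvme_set_options \
              --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"
          connect_authenticate sha512 "$dhgroup" "$keyid"
          # connect_authenticate: add_host, attach, qpair checks, detach,
          # kernel nvme connect/disconnect, remove_host (auth.sh@65-83).
      done
  done
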
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:33.558 04:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.558 04:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.558 04:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.558 04:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.819 04:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:18:33.819 04:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:01:MWEwMmIyMzQyYjA3Y2I4Zjc0MDI4MzE5YmUwYmU0MjZDKhLl: 00:18:34.391 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.652 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.652 04:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:34.652 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:35.223 00:18:35.223 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.223 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.223 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.485 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.485 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.485 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.485 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.485 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.485 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.485 { 00:18:35.485 "cntlid": 143, 00:18:35.485 "qid": 0, 00:18:35.485 "state": "enabled", 00:18:35.485 "thread": "nvmf_tgt_poll_group_000", 00:18:35.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:35.485 "listen_address": { 00:18:35.485 "trtype": "TCP", 00:18:35.485 "adrfam": "IPv4", 00:18:35.485 "traddr": "10.0.0.2", 00:18:35.485 "trsvcid": "4420" 00:18:35.485 }, 00:18:35.485 "peer_address": { 00:18:35.485 "trtype": "TCP", 00:18:35.485 "adrfam": "IPv4", 00:18:35.485 "traddr": "10.0.0.1", 00:18:35.485 "trsvcid": "60430" 00:18:35.485 }, 00:18:35.485 "auth": { 00:18:35.485 "state": "completed", 00:18:35.485 "digest": "sha512", 00:18:35.485 "dhgroup": "ffdhe8192" 00:18:35.485 } 00:18:35.485 } 00:18:35.485 ]' 00:18:35.485 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.485 04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:35.485 
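This attach passes only --dhchap-key, with no --dhchap-ctrlr-key: unidirectional authentication, where the host answers the controller's challenge but does not issue one of its own. Stripped of the hostrpc/bdev_connect wrappers, the host-side call reduces to:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    # host-side bdev attach, authenticating with the key named key3
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3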
04:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.485 04:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:35.485 04:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.485 04:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.485 04:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.485 04:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.746 04:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:35.746 04:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.688 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.688 04:29:50 
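The IFS=, / printf %s pairs above are how the harness joins its digest and dhgroup arrays into the comma-separated lists that bdev_nvme_set_options expects. Written out directly, re-enabling the full matrix on the host side looks like:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    digests=(sha256 sha384 sha512)
    dhgroups=(null ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    # "${arr[*]}" joins on the first character of IFS, here a comma
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests  "$(IFS=,; printf %s "${digests[*]}")" \
        --dhchap-dhgroups "$(IFS=,; printf %s "${dhgroups[*]}")"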
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:36.688 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.260 00:18:37.260 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:37.260 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:37.260 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.520 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.520 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.520 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.520 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.520 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.520 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.520 { 00:18:37.520 "cntlid": 145, 00:18:37.520 "qid": 0, 00:18:37.520 "state": "enabled", 00:18:37.520 "thread": "nvmf_tgt_poll_group_000", 00:18:37.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:37.520 "listen_address": { 00:18:37.520 "trtype": "TCP", 00:18:37.520 "adrfam": "IPv4", 00:18:37.520 "traddr": "10.0.0.2", 00:18:37.521 "trsvcid": "4420" 00:18:37.521 }, 00:18:37.521 "peer_address": { 00:18:37.521 
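On the target, the host entry pairs a host key with a controller key: key0 is what the host must prove knowledge of, and ckey0 is what the controller uses to answer a bidirectional challenge from the host. The minimal form of the call traced above:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    # allow this host on the subsystem, with bidirectional DH-CHAP keys
    "$rpc" nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0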
"trtype": "TCP", 00:18:37.521 "adrfam": "IPv4", 00:18:37.521 "traddr": "10.0.0.1", 00:18:37.521 "trsvcid": "60444" 00:18:37.521 }, 00:18:37.521 "auth": { 00:18:37.521 "state": "completed", 00:18:37.521 "digest": "sha512", 00:18:37.521 "dhgroup": "ffdhe8192" 00:18:37.521 } 00:18:37.521 } 00:18:37.521 ]' 00:18:37.521 04:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.521 04:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:37.521 04:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.521 04:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:37.521 04:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.521 04:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.521 04:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.521 04:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.782 04:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:18:37.782 04:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YTg5MmY0NWI3NjlkZDZlZmYxNGI2MzYwMTliY2UyZmI3ZGIwODI4ZWEzMjhhZTI4LrnLCg==: --dhchap-ctrl-secret DHHC-1:03:YTJlOTJlNjFmNTQ4YzJhYjlmNzgzZjM1Yzc1MTkzM2EwNWM1MGFhMzAxMjU1NDA0OTE5MmJiY2UzZDRkMDVkNbhFpEE=: 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:38.725 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:39.083 request: 00:18:39.083 { 00:18:39.083 "name": "nvme0", 00:18:39.083 "trtype": "tcp", 00:18:39.083 "traddr": "10.0.0.2", 00:18:39.083 "adrfam": "ipv4", 00:18:39.083 "trsvcid": "4420", 00:18:39.083 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:39.083 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:39.083 "prchk_reftag": false, 00:18:39.083 "prchk_guard": false, 00:18:39.083 "hdgst": false, 00:18:39.083 "ddgst": false, 00:18:39.083 "dhchap_key": "key2", 00:18:39.083 "allow_unrecognized_csi": false, 00:18:39.083 "method": "bdev_nvme_attach_controller", 00:18:39.083 "req_id": 1 00:18:39.083 } 00:18:39.083 Got JSON-RPC error response 00:18:39.083 response: 00:18:39.083 { 00:18:39.083 "code": -5, 00:18:39.083 "message": "Input/output error" 00:18:39.083 } 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.083 04:29:52 
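NOT is the harness's failure-assertion wrapper: the attach is required to fail, and the -5 Input/output error response above is the expected outcome, since the host presented key2 while the target had just installed key1 for this host NQN. An equivalent inline assertion without the wrapper:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    if "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2; then
        echo "attach with a mismatched key unexpectedly succeeded" >&2
        exit 1
    fi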
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:39.083 04:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:39.674 request: 00:18:39.674 { 00:18:39.674 "name": "nvme0", 00:18:39.674 "trtype": "tcp", 00:18:39.674 "traddr": "10.0.0.2", 00:18:39.674 "adrfam": "ipv4", 00:18:39.674 "trsvcid": "4420", 00:18:39.675 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:39.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:39.675 "prchk_reftag": false, 00:18:39.675 "prchk_guard": false, 00:18:39.675 "hdgst": false, 00:18:39.675 "ddgst": false, 00:18:39.675 "dhchap_key": "key1", 00:18:39.675 "dhchap_ctrlr_key": "ckey2", 00:18:39.675 "allow_unrecognized_csi": false, 00:18:39.675 "method": "bdev_nvme_attach_controller", 00:18:39.675 "req_id": 1 00:18:39.675 } 00:18:39.675 Got JSON-RPC error response 00:18:39.675 response: 00:18:39.675 { 00:18:39.675 "code": -5, 00:18:39.675 "message": "Input/output error" 00:18:39.675 } 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:39.675 04:29:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.675 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.935 request: 00:18:39.935 { 00:18:39.935 "name": "nvme0", 00:18:39.935 "trtype": "tcp", 00:18:39.935 "traddr": "10.0.0.2", 00:18:39.935 "adrfam": "ipv4", 00:18:39.935 "trsvcid": "4420", 00:18:39.935 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:39.935 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:39.935 "prchk_reftag": false, 00:18:39.935 "prchk_guard": false, 00:18:39.935 "hdgst": false, 00:18:39.935 "ddgst": false, 00:18:39.935 "dhchap_key": "key1", 00:18:39.935 "dhchap_ctrlr_key": "ckey1", 00:18:39.935 "allow_unrecognized_csi": false, 00:18:39.935 "method": "bdev_nvme_attach_controller", 00:18:39.935 "req_id": 1 00:18:39.935 } 00:18:39.935 Got JSON-RPC error response 00:18:39.935 response: 00:18:39.935 { 00:18:39.935 "code": -5, 00:18:39.935 "message": "Input/output error" 00:18:39.935 } 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2964331 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2964331 ']' 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2964331 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2964331 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2964331' 00:18:40.197 killing process with pid 2964331 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2964331 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2964331 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2991463 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2991463 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2991463 ']' 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:40.197 04:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.140 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:41.140 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:41.140 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:41.140 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:41.140 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.140 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:41.140 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:41.140 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2991463 00:18:41.140 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2991463 ']' 00:18:41.140 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.140 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:41.140 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
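The restart above brings the target back with -i 0 (shared-memory id), -e 0xFFFF (tracepoint group mask), -L nvmf_auth (the auth component's debug log), and --wait-for-rpc, which parks the app until initialization is completed over RPC. A bare-bones version of that sequence:

    # run inside the test netns, as the harness does via 'ip netns exec cvl_0_0_ns_spdk'
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # once /var/tmp/spdk.sock is up, finish subsystem initialization
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init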
00:18:41.140 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:41.140 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.401 null0 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.wiJ 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.1gw ]] 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.1gw 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.NNa 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.pqN ]] 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.pqN 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:41.401 04:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.D7Q 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.XwT ]] 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.XwT 00:18:41.401 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.402 04:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.zEg 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
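After the restart, each DH-CHAP secret is registered as a named keyring entry backed by a file on disk (key0../ckey0..), and the later add_host/attach RPCs refer to those names instead of raw secrets. The sketch below assumes the file simply holds the DHHC-1 string; treat that file format, and the placeholder secret, as assumptions about SPDK's keyring_file backend:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # hypothetical key material -- the harness generates these files earlier in the run
    printf 'DHHC-1:00:%s:' "$base64_secret" > /tmp/spdk.key-null.wiJ
    "$rpc" keyring_file_add_key key0 /tmp/spdk.key-null.wiJ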
00:18:41.402 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:42.343 nvme0n1 00:18:42.343 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.343 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.343 04:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.603 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.603 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.603 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.603 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.603 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.603 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.603 { 00:18:42.603 "cntlid": 1, 00:18:42.603 "qid": 0, 00:18:42.603 "state": "enabled", 00:18:42.603 "thread": "nvmf_tgt_poll_group_000", 00:18:42.603 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:42.603 "listen_address": { 00:18:42.603 "trtype": "TCP", 00:18:42.603 "adrfam": "IPv4", 00:18:42.603 "traddr": "10.0.0.2", 00:18:42.603 "trsvcid": "4420" 00:18:42.603 }, 00:18:42.603 "peer_address": { 00:18:42.603 "trtype": "TCP", 00:18:42.603 "adrfam": "IPv4", 00:18:42.603 "traddr": "10.0.0.1", 00:18:42.603 "trsvcid": "60494" 00:18:42.603 }, 00:18:42.603 "auth": { 00:18:42.603 "state": "completed", 00:18:42.603 "digest": "sha512", 00:18:42.603 "dhgroup": "ffdhe8192" 00:18:42.603 } 00:18:42.603 } 00:18:42.603 ]' 00:18:42.603 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.603 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:42.603 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.603 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:42.603 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.603 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.603 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.603 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.864 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:42.864 04:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:43.807 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.067 request: 00:18:44.067 { 00:18:44.067 "name": "nvme0", 00:18:44.067 "trtype": "tcp", 00:18:44.067 "traddr": "10.0.0.2", 00:18:44.067 "adrfam": "ipv4", 00:18:44.067 "trsvcid": "4420", 00:18:44.067 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:44.067 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:44.067 "prchk_reftag": false, 00:18:44.067 "prchk_guard": false, 00:18:44.067 "hdgst": false, 00:18:44.067 "ddgst": false, 00:18:44.067 "dhchap_key": "key3", 00:18:44.067 "allow_unrecognized_csi": false, 00:18:44.067 "method": "bdev_nvme_attach_controller", 00:18:44.067 "req_id": 1 00:18:44.067 } 00:18:44.067 Got JSON-RPC error response 00:18:44.067 response: 00:18:44.067 { 00:18:44.067 "code": -5, 00:18:44.067 "message": "Input/output error" 00:18:44.067 } 00:18:44.067 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:44.067 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:44.067 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:44.067 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:44.067 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:44.067 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:44.067 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:44.067 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
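These two NOT attaches probe negotiation limits rather than key correctness: first the host offers sha256 only, then ffdhe2048 as the only DH group, while key3 was provisioned for the sha512/ffdhe8192 flow, so both attempts are expected to end in the same -5 Input/output error. The recovery step, mirrored below, is simply to widen the host's offer back out:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # restore the full digest and dhgroup offer after the restricted negative tests
    "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256,sha384,sha512 \
        --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192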
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:44.329 request: 00:18:44.329 { 00:18:44.329 "name": "nvme0", 00:18:44.329 "trtype": "tcp", 00:18:44.329 "traddr": "10.0.0.2", 00:18:44.329 "adrfam": "ipv4", 00:18:44.329 "trsvcid": "4420", 00:18:44.329 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:44.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:44.329 "prchk_reftag": false, 00:18:44.329 "prchk_guard": false, 00:18:44.329 "hdgst": false, 00:18:44.329 "ddgst": false, 00:18:44.329 "dhchap_key": "key3", 00:18:44.329 "allow_unrecognized_csi": false, 00:18:44.329 "method": "bdev_nvme_attach_controller", 00:18:44.329 "req_id": 1 00:18:44.329 } 00:18:44.329 Got JSON-RPC error response 00:18:44.329 response: 00:18:44.329 { 00:18:44.329 "code": -5, 00:18:44.329 "message": "Input/output error" 00:18:44.329 } 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:44.329 04:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:44.590 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:44.850 request: 00:18:44.850 { 00:18:44.850 "name": "nvme0", 00:18:44.850 "trtype": "tcp", 00:18:44.850 "traddr": "10.0.0.2", 00:18:44.850 "adrfam": "ipv4", 00:18:44.850 "trsvcid": "4420", 00:18:44.850 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:44.850 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:18:44.850 "prchk_reftag": false, 00:18:44.850 "prchk_guard": false, 00:18:44.850 "hdgst": false, 00:18:44.850 "ddgst": false, 00:18:44.850 "dhchap_key": "key0", 00:18:44.850 "dhchap_ctrlr_key": "key1", 00:18:44.850 "allow_unrecognized_csi": false, 00:18:44.850 "method": "bdev_nvme_attach_controller", 00:18:44.850 "req_id": 1 00:18:44.850 } 00:18:44.850 Got JSON-RPC error response 00:18:44.850 response: 00:18:44.850 { 00:18:44.850 "code": -5, 00:18:44.850 "message": "Input/output error" 00:18:44.850 } 00:18:44.850 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:44.850 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:44.850 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:44.850 04:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:44.850 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:44.850 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:44.850 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:45.110 nvme0n1 00:18:45.110 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:18:45.110 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:45.110 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.369 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.369 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.369 04:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:45.629 04:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:18:45.629 04:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.629 04:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.629 04:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.629 04:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:45.629 04:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:45.629 04:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:46.579 nvme0n1 00:18:46.579 04:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:46.579 04:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:46.579 04:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
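nvmf_subsystem_set_keys rotates the DH-CHAP keys on an existing host entry in place of a remove/add cycle: here the entry is switched to key1, a fresh attach with key1 then succeeds, and further down the retired key is refused with the usual I/O error. Condensed:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    # rotate the target-side key, then reconnect with the new one
    "$rpc" nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key key1
    "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1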
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.579 04:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.579 04:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:46.579 04:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.579 04:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.579 04:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.579 04:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:46.579 04:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:46.579 04:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.840 04:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.840 04:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:46.840 04:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: --dhchap-ctrl-secret DHHC-1:03:ODFhODkwNzJiZjc3YTQwNWJiOTBiOGIzZTYzZGI0ODc0MjU4ZDM3N2MzM2FjOTE4MTAzMWU0OTIyYjgxZDgxMp3MNjY=: 00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1
00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect
00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect
00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1
00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:47.781 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:18:48.353 request:
00:18:48.353 {
00:18:48.353 "name": "nvme0",
00:18:48.353 "trtype": "tcp",
00:18:48.353 "traddr": "10.0.0.2",
00:18:48.353 "adrfam": "ipv4",
00:18:48.353 "trsvcid": "4420",
00:18:48.353 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:18:48.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be",
00:18:48.353 "prchk_reftag": false,
00:18:48.353 "prchk_guard": false,
00:18:48.353 "hdgst": false,
00:18:48.353 "ddgst": false,
00:18:48.353 "dhchap_key": "key1",
00:18:48.353 "allow_unrecognized_csi": false,
00:18:48.353 "method": "bdev_nvme_attach_controller",
00:18:48.353 "req_id": 1
00:18:48.353 }
00:18:48.353 Got JSON-RPC error response
00:18:48.353 response:
00:18:48.353 {
00:18:48.353 "code": -5,
00:18:48.353 "message": "Input/output error"
00:18:48.353 }
00:18:48.353 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:48.353 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:48.353 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:48.353 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
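The -5 (Input/output error) above is the expected outcome: step 222 rotated the subsystem's DH-HMAC-CHAP keys to key2/key3, so a host that still offers key1 fails authentication. A minimal sketch of the pattern being exercised, assuming the keys were already registered under the names key0..key3 (as the earlier setup steps of this test do) and that $hostnqn holds the host NQN used above:

    # Target side: rotate the accepted key pair for this host.
    scripts/rpc.py nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # Host side: attaching with the stale key now fails (the -5 above) ...
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 \
        && echo 'unexpected: stale key accepted'
    # ... while attaching with the current pair succeeds, as step 229 shows next.
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3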
00:18:48.353 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:48.353 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:48.353 04:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:48.924 nvme0n1
00:18:49.185 04:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:18:49.185 04:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:18:49.185 04:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:49.185 04:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:49.185 04:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:49.185 04:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:49.445 04:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:49.445 04:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:49.445 04:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:49.445 04:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:49.445 04:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:18:49.445 04:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:18:49.445 04:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:18:49.704 nvme0n1
00:18:49.704 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:18:49.704 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:18:49.704 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: '' 2s
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW:
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW: ]]
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YzRhZGM4NGNjZDk5YTk1M2Q4NzM4NWI4YjY2M2ZjZGH/Q9aW:
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:18:49.964 04:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: 2s
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==:
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==: ]]
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZTJhMDU4OTkzZjhmZGM4YTgyNmQyZGFjOTFmNDhkODI4MTQxMDAwZmY2ZTc5Mzg5WNJcFQ==:
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:18:52.504 04:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:18:54.417 04:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:18:54.417 04:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0
00:18:54.417 04:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME
00:18:54.417 04:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1
00:18:54.417 04:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME
00:18:54.417 04:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1
00:18:54.417 04:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0
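nvme_set_keys (steps 240 and 244 above) re-keys the kernel-initiator controller in place: it writes the new DHHC-1 secret into the controller's nvme-fabrics sysfs node and sleeps, after which waitforblk confirms the namespace is still served, i.e. re-authentication completed without dropping the I/O path. A rough sketch of what the traced helper does; the exact sysfs attribute names are an assumption here, since bash xtrace does not print redirection targets:

    ctl=nvme0
    dev=/sys/devices/virtual/nvme-fabrics/ctl/$ctl
    # Assumed attributes: dhchap_secret for the host key, dhchap_ctrl_secret
    # for the controller (bidirectional) key.
    [[ -z $key ]] || echo "$key" > "$dev/dhchap_secret"
    [[ -z $ckey ]] || echo "$ckey" > "$dev/dhchap_ctrl_secret"
    sleep "$timeout"                       # give re-authentication time to finish
    lsblk -l -o NAME | grep -q -w nvme0n1  # waitforblk: block device still present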
00:18:54.417 04:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:54.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:54.417 04:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:54.417 04:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:54.417 04:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.417 04:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:54.417 04:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:54.417 04:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:54.417 04:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:54.988 nvme0n1
00:18:54.988 04:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:54.988 04:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:54.988 04:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:54.988 04:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:54.988 04:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:54.988 04:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:55.560 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:18:55.560 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:18:55.560 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:55.821 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:55.821 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:55.821 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:55.821 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:55.821 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:55.821 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:18:55.821 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:18:55.821 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:18:55.821 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:18:55.821 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:56.082 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:18:56.082 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:56.082 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:56.082 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:56.082 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:56.082 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:56.082 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:56.082 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:56.082 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
00:18:56.082 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:56.082 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:18:56.082 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:56.082 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:56.082 04:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:18:56.653 request:
00:18:56.653 {
00:18:56.653 "name": "nvme0",
00:18:56.653 "dhchap_key": "key1",
00:18:56.653 "dhchap_ctrlr_key": "key3",
00:18:56.653 "method": "bdev_nvme_set_keys",
00:18:56.653 "req_id": 1
00:18:56.653 }
00:18:56.653 Got JSON-RPC error response
00:18:56.653 response:
00:18:56.653 {
00:18:56.653 "code": -13,
00:18:56.653 "message": "Permission denied"
00:18:56.653 }
00:18:56.653 04:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:56.653 04:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:56.653 04:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:56.653 04:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
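The -13 (Permission denied) above is also deliberate: step 260 provisioned only key2/key3 for this host on the subsystem, so asking the initiator to re-key to key1/key3 is rejected when re-authentication runs. In sketch form, using only commands that appear verbatim in this trace:

    # Rotating to the provisioned pair succeeds (as step 253 did earlier):
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key2 --dhchap-ctrlr-key key3
    # Rotating to a pair the target was not given fails with -13:
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 \
        --dhchap-key key1 --dhchap-ctrlr-key key3 \
        || echo 'denied, as the test expects'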
00:18:56.653 04:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:18:56.653 04:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:18:56.653 04:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:56.914 04:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:18:56.914 04:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:18:57.856 04:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:18:57.856 04:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:18:57.856 04:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:58.117 04:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:18:58.117 04:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1
00:18:58.117 04:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:58.117 04:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:58.117 04:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:58.117 04:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:58.117 04:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:58.117 04:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:18:59.060 nvme0n1
00:18:59.060 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3
00:18:59.060 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:18:59.060 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:59.060 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:18:59.060 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:59.060 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0
00:18:59.060 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:59.060 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc
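The autotest_common.sh entries interleaved here (local es=0, valid_exec_arg, type -t, and the es checks after the RPC) are the expansion of the NOT helper, which step 271 uses to assert that the next bdev_nvme_set_keys call fails. Roughly, and simplified from what the trace shows rather than copied from the SPDK source:

    NOT() {
        local es=0
        "$@" || es=$?   # run the command, keeping its exit status
        (( !es == 0 ))  # invert it: succeed only if the command failed
    }
    NOT false && echo 'ok: the failure was expected'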
00:18:59.060 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:59.060 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc
00:18:59.060 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:18:59.060 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:59.060 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:18:59.321 request:
00:18:59.321 {
00:18:59.321 "name": "nvme0",
00:18:59.321 "dhchap_key": "key2",
00:18:59.321 "dhchap_ctrlr_key": "key0",
00:18:59.321 "method": "bdev_nvme_set_keys",
00:18:59.321 "req_id": 1
00:18:59.321 }
00:18:59.321 Got JSON-RPC error response
00:18:59.321 response:
00:18:59.321 {
00:18:59.321 "code": -13,
00:18:59.321 "message": "Permission denied"
00:18:59.321 }
00:18:59.321 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1
00:18:59.321 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:18:59.321 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:18:59.321 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:18:59.321 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:18:59.321 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:18:59.321 04:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:18:59.582 04:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 ))
00:18:59.582 04:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s
00:19:00.524 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:19:00.524 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:19:00.524 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:19:00.785 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 ))
00:19:00.785 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT
00:19:00.785 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup
00:19:00.785 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2964388
00:19:00.785 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2964388 ']'
00:19:00.785 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2964388
00:19:00.785 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:19:00.785
04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:00.785 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2964388 00:19:00.785 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:00.785 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:00.785 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2964388' 00:19:00.785 killing process with pid 2964388 00:19:00.785 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2964388 00:19:00.785 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2964388 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:01.046 rmmod nvme_tcp 00:19:01.046 rmmod nvme_fabrics 00:19:01.046 rmmod nvme_keyring 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2991463 ']' 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2991463 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2991463 ']' 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2991463 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2991463 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2991463' 00:19:01.046 killing process with pid 2991463 00:19:01.046 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2991463 00:19:01.046 04:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2991463 00:19:01.308 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:01.308 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:01.308 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:01.308 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:19:01.308 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:19:01.308 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:01.308 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:19:01.308 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:01.308 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:01.308 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:01.308 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:01.308 04:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.219 04:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:03.480 04:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.wiJ /tmp/spdk.key-sha256.NNa /tmp/spdk.key-sha384.D7Q /tmp/spdk.key-sha512.zEg /tmp/spdk.key-sha512.1gw /tmp/spdk.key-sha384.pqN /tmp/spdk.key-sha256.XwT '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:19:03.480 00:19:03.480 real 2m45.371s 00:19:03.480 user 6m8.995s 00:19:03.480 sys 0m24.164s 00:19:03.480 04:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:03.480 04:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.480 ************************************ 00:19:03.480 END TEST nvmf_auth_target 00:19:03.480 ************************************ 00:19:03.480 04:30:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:19:03.480 04:30:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:03.480 04:30:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:19:03.480 04:30:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:03.480 04:30:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:03.480 ************************************ 00:19:03.480 START TEST nvmf_bdevio_no_huge 00:19:03.480 ************************************ 00:19:03.480 04:30:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:03.480 * Looking for test storage... 
00:19:03.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:03.480 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:03.480 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:19:03.480 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:03.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.742 --rc genhtml_branch_coverage=1 00:19:03.742 --rc genhtml_function_coverage=1 00:19:03.742 --rc genhtml_legend=1 00:19:03.742 --rc geninfo_all_blocks=1 00:19:03.742 --rc geninfo_unexecuted_blocks=1 00:19:03.742 00:19:03.742 ' 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:03.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.742 --rc genhtml_branch_coverage=1 00:19:03.742 --rc genhtml_function_coverage=1 00:19:03.742 --rc genhtml_legend=1 00:19:03.742 --rc geninfo_all_blocks=1 00:19:03.742 --rc geninfo_unexecuted_blocks=1 00:19:03.742 00:19:03.742 ' 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:03.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.742 --rc genhtml_branch_coverage=1 00:19:03.742 --rc genhtml_function_coverage=1 00:19:03.742 --rc genhtml_legend=1 00:19:03.742 --rc geninfo_all_blocks=1 00:19:03.742 --rc geninfo_unexecuted_blocks=1 00:19:03.742 00:19:03.742 ' 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:03.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:03.742 --rc genhtml_branch_coverage=1 00:19:03.742 --rc genhtml_function_coverage=1 00:19:03.742 --rc genhtml_legend=1 00:19:03.742 --rc geninfo_all_blocks=1 00:19:03.742 --rc geninfo_unexecuted_blocks=1 00:19:03.742 00:19:03.742 ' 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.742 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:19:03.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:19:03.743 04:30:17 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:19:11.880 
04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:11.880 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:11.880 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:11.880 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:11.880 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:11.880 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:11.880 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.880 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.495 ms 00:19:11.880 00:19:11.880 --- 10.0.0.2 ping statistics --- 00:19:11.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.881 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.881 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:11.881 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:19:11.881 00:19:11.881 --- 10.0.0.1 ping statistics --- 00:19:11.881 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.881 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=3000350 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 3000350 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 3000350 ']' 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:11.881 04:30:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.881 [2024-11-05 04:30:24.482431] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:19:11.881 [2024-11-05 04:30:24.482516] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:11.881 [2024-11-05 04:30:24.587771] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.881 [2024-11-05 04:30:24.646809] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.881 [2024-11-05 04:30:24.646860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.881 [2024-11-05 04:30:24.646868] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.881 [2024-11-05 04:30:24.646876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.881 [2024-11-05 04:30:24.646882] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
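Note: the nvmf_tcp_init and nvmfappstart plumbing traced above reduces to the shell sketch below (paths shortened to the repo root; the RPC poll loop is an assumed stand-in for the waitforlisten helper, whose body the trace only hints at via rpc_addr and max_retries):

    # split the E810 port pair into target/initiator roles across a namespace (as traced)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
    # start the target inside the namespace without hugepages, then wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do   # hypothetical poll loop standing in for waitforlisten
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done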
00:19:11.881 [2024-11-05 04:30:24.648413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:19:11.881 [2024-11-05 04:30:24.648575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:19:11.881 [2024-11-05 04:30:24.648738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.881 [2024-11-05 04:30:24.648739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.881 [2024-11-05 04:30:25.346648] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.881 Malloc0 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:11.881 [2024-11-05 04:30:25.400714] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:11.881 { 00:19:11.881 "params": { 00:19:11.881 "name": "Nvme$subsystem", 00:19:11.881 "trtype": "$TEST_TRANSPORT", 00:19:11.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:11.881 "adrfam": "ipv4", 00:19:11.881 "trsvcid": "$NVMF_PORT", 00:19:11.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:11.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:11.881 "hdgst": ${hdgst:-false}, 00:19:11.881 "ddgst": ${ddgst:-false} 00:19:11.881 }, 00:19:11.881 "method": "bdev_nvme_attach_controller" 00:19:11.881 } 00:19:11.881 EOF 00:19:11.881 )") 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:19:11.881 04:30:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:11.881 "params": { 00:19:11.881 "name": "Nvme1", 00:19:11.881 "trtype": "tcp", 00:19:11.881 "traddr": "10.0.0.2", 00:19:11.881 "adrfam": "ipv4", 00:19:11.881 "trsvcid": "4420", 00:19:11.881 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.881 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:11.881 "hdgst": false, 00:19:11.881 "ddgst": false 00:19:11.881 }, 00:19:11.881 "method": "bdev_nvme_attach_controller" 00:19:11.881 }' 00:19:11.881 [2024-11-05 04:30:25.460556] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
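Note: the JSON blob printed just above is what gen_nvmf_target_json hands bdevio on /dev/fd/62; the same attach can be expressed as a single RPC against a running app (a sketch using the standard rpc.py options, not a command from this run):

    # equivalent of the bdev_nvme_attach_controller config entry printed above;
    # -b Nvme1 yields the Nvme1n1 bdev that the bdevio suite exercises below
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 \
        -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1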
00:19:11.881 [2024-11-05 04:30:25.460624] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3000545 ]
00:19:12.142 [2024-11-05 04:30:25.540174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:12.142 [2024-11-05 04:30:25.595776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:19:12.142 [2024-11-05 04:30:25.595848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:12.142 [2024-11-05 04:30:25.596039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:12.402 I/O targets:
00:19:12.402 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:19:12.402
00:19:12.402
00:19:12.402 CUnit - A unit testing framework for C - Version 2.1-3
00:19:12.402 http://cunit.sourceforge.net/
00:19:12.402
00:19:12.402
00:19:12.402 Suite: bdevio tests on: Nvme1n1
00:19:12.402 Test: blockdev write read block ...passed
00:19:12.402 Test: blockdev write zeroes read block ...passed
00:19:12.402 Test: blockdev write zeroes read no split ...passed
00:19:12.663 Test: blockdev write zeroes read split ...passed
00:19:12.663 Test: blockdev write zeroes read split partial ...passed
00:19:12.663 Test: blockdev reset ...[2024-11-05 04:30:26.059122] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:19:12.663 [2024-11-05 04:30:26.059181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x193a800 (9): Bad file descriptor
00:19:12.663 [2024-11-05 04:30:26.210504] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
00:19:12.663 passed 00:19:12.663 Test: blockdev write read 8 blocks ...passed 00:19:12.663 Test: blockdev write read size > 128k ...passed 00:19:12.663 Test: blockdev write read invalid size ...passed 00:19:12.663 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:12.663 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:12.663 Test: blockdev write read max offset ...passed 00:19:12.924 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:12.924 Test: blockdev writev readv 8 blocks ...passed 00:19:12.924 Test: blockdev writev readv 30 x 1block ...passed 00:19:12.924 Test: blockdev writev readv block ...passed 00:19:12.924 Test: blockdev writev readv size > 128k ...passed 00:19:12.924 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:12.924 Test: blockdev comparev and writev ...[2024-11-05 04:30:26.395320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.924 [2024-11-05 04:30:26.395347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:12.924 [2024-11-05 04:30:26.395358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.924 [2024-11-05 04:30:26.395364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:12.924 [2024-11-05 04:30:26.395869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.924 [2024-11-05 04:30:26.395879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:12.924 [2024-11-05 04:30:26.395888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.924 [2024-11-05 04:30:26.395894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:12.924 [2024-11-05 04:30:26.396359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.924 [2024-11-05 04:30:26.396367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:12.924 [2024-11-05 04:30:26.396382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.924 [2024-11-05 04:30:26.396388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:12.924 [2024-11-05 04:30:26.396866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.924 [2024-11-05 04:30:26.396874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:12.924 [2024-11-05 04:30:26.396884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:12.924 [2024-11-05 04:30:26.396889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:12.924 passed 00:19:12.924 Test: blockdev nvme passthru rw ...passed 00:19:12.924 Test: blockdev nvme passthru vendor specific ...[2024-11-05 04:30:26.481566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:12.924 [2024-11-05 04:30:26.481577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:12.924 [2024-11-05 04:30:26.481906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:12.924 [2024-11-05 04:30:26.481914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:12.924 [2024-11-05 04:30:26.482243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:12.924 [2024-11-05 04:30:26.482250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:12.924 [2024-11-05 04:30:26.482581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:12.924 [2024-11-05 04:30:26.482590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:12.924 passed 00:19:12.924 Test: blockdev nvme admin passthru ...passed 00:19:12.924 Test: blockdev copy ...passed 00:19:12.924 00:19:12.924 Run Summary: Type Total Ran Passed Failed Inactive 00:19:12.924 suites 1 1 n/a 0 0 00:19:12.924 tests 23 23 23 0 0 00:19:12.924 asserts 152 152 152 0 n/a 00:19:12.924 00:19:12.924 Elapsed time = 1.255 seconds 00:19:13.185 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:13.185 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.185 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:19:13.185 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.185 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:13.185 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:19:13.185 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:13.185 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:19:13.185 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:13.185 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:19:13.185 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:13.185 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:13.185 rmmod nvme_tcp 00:19:13.446 rmmod nvme_fabrics 00:19:13.446 rmmod nvme_keyring 00:19:13.446 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:13.446 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:19:13.446 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:19:13.446 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 3000350 ']' 00:19:13.446 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 3000350 00:19:13.446 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 3000350 ']' 00:19:13.446 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 3000350 00:19:13.446 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:19:13.446 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:13.446 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3000350 00:19:13.446 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:19:13.446 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:19:13.446 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3000350' 00:19:13.446 killing process with pid 3000350 00:19:13.446 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 3000350 00:19:13.446 04:30:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 3000350 00:19:13.707 04:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:13.707 04:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:13.707 04:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:13.707 04:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:19:13.707 04:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:19:13.707 04:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:19:13.707 04:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:13.707 04:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:13.707 04:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:13.707 04:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.707 04:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.707 04:30:27 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:16.250 00:19:16.250 real 0m12.466s 00:19:16.250 user 0m15.029s 00:19:16.250 sys 0m6.438s 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:19:16.250 ************************************ 00:19:16.250 END TEST nvmf_bdevio_no_huge 00:19:16.250 ************************************ 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:16.250 ************************************ 00:19:16.250 START TEST nvmf_tls 00:19:16.250 ************************************ 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:16.250 * Looking for test storage... 00:19:16.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:16.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.250 --rc genhtml_branch_coverage=1 00:19:16.250 --rc genhtml_function_coverage=1 00:19:16.250 --rc genhtml_legend=1 00:19:16.250 --rc geninfo_all_blocks=1 00:19:16.250 --rc geninfo_unexecuted_blocks=1 00:19:16.250 00:19:16.250 ' 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:16.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.250 --rc genhtml_branch_coverage=1 00:19:16.250 --rc genhtml_function_coverage=1 00:19:16.250 --rc genhtml_legend=1 00:19:16.250 --rc geninfo_all_blocks=1 00:19:16.250 --rc geninfo_unexecuted_blocks=1 00:19:16.250 00:19:16.250 ' 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:16.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.250 --rc genhtml_branch_coverage=1 00:19:16.250 --rc genhtml_function_coverage=1 00:19:16.250 --rc genhtml_legend=1 00:19:16.250 --rc geninfo_all_blocks=1 00:19:16.250 --rc geninfo_unexecuted_blocks=1 00:19:16.250 00:19:16.250 ' 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:16.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.250 --rc genhtml_branch_coverage=1 00:19:16.250 --rc genhtml_function_coverage=1 00:19:16.250 --rc genhtml_legend=1 00:19:16.250 --rc geninfo_all_blocks=1 00:19:16.250 --rc geninfo_unexecuted_blocks=1 00:19:16.250 00:19:16.250 ' 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
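Note: the lcov gate traced above (lt 1.15 2, via cmp_versions splitting on IFS=.-:) is a component-wise version comparison; reconstructed from the trace it behaves roughly as below (a sketch, not the verbatim scripts/common.sh source; the real helper also routes each component through decimal() for sanitizing):

    lt() { cmp_versions "$1" '<' "$2"; }   # true when version $1 precedes $2
    cmp_versions() {
        local IFS=.-:                       # the separators seen in the trace
        local ver1 ver2 v lt=0 gt=0
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        # compare component by component, treating missing components as 0
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && { gt=1; break; }
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && { lt=1; break; }
        done
        case "$2" in '<') ((lt == 1)) ;; '>') ((gt == 1)) ;; esac
    }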
00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.250 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:16.251 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:19:16.251 04:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:24.389 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:24.390 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:24.390 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:24.390 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:24.390 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:24.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:24.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:19:24.390 00:19:24.390 --- 10.0.0.2 ping statistics --- 00:19:24.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.390 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:24.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:24.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:19:24.390 00:19:24.390 --- 10.0.0.1 ping statistics --- 00:19:24.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:24.390 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:24.390 04:30:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:24.390 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:24.390 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:24.390 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:24.390 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.390 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3005109 00:19:24.390 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3005109 00:19:24.390 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:24.390 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3005109 ']' 00:19:24.390 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.390 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:24.390 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.390 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:24.391 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.391 [2024-11-05 04:30:37.076514] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
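The block above is nvmf_tcp_init from nvmf/common.sh building the physical test link: the two E810 ports discovered earlier (cvl_0_0 and cvl_0_1, under 0000:4b:00.0 and 0000:4b:00.1) are split across network namespaces so target and initiator traffic crosses a real NIC instead of loopback. Condensed from the trace, with interface names and addresses exactly as in this run, the setup amounts to:

    ip netns add cvl_0_0_ns_spdk                      # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # first port becomes the target side
    ip addr add 10.0.0.1/24 dev cvl_0_1               # second port stays in the root ns (initiator)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                # root ns reaches the target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # and back

Both pings answering, as above, is what lets nvmf_tcp_init return 0; every nvmf_tgt started later in this log runs inside cvl_0_0_ns_spdk via NVMF_TARGET_NS_CMD.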
00:19:24.391 [2024-11-05 04:30:37.076583] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.391 [2024-11-05 04:30:37.176513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.391 [2024-11-05 04:30:37.226726] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.391 [2024-11-05 04:30:37.226792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.391 [2024-11-05 04:30:37.226801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.391 [2024-11-05 04:30:37.226808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.391 [2024-11-05 04:30:37.226814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.391 [2024-11-05 04:30:37.227586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.391 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:24.391 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:24.391 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:24.391 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:24.391 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.391 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:24.391 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:19:24.391 04:30:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:24.651 true 00:19:24.651 04:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:24.651 04:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:19:24.912 04:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:19:24.912 04:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:19:24.912 04:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:24.912 04:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:24.912 04:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:19:25.173 04:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:19:25.173 04:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:19:25.173 04:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:25.434 04:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.434 04:30:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:19:25.694 04:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:19:25.694 04:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:19:25.694 04:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.694 04:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:19:25.694 04:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:19:25.694 04:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:19:25.694 04:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:25.955 04:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:25.955 04:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:26.214 04:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:19:26.214 04:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:19:26.214 04:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:26.473 04:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:26.473 04:30:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:26.473 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:19:26.473 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:19:26.473 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:19:26.473 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:19:26.474 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:26.474 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:26.474 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:19:26.474 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:26.474 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:26.474 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:26.474 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:19:26.474 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:19:26.474 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:19:26.474 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:26.474 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:19:26.474 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:19:26.474 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:26.733 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:26.733 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:19:26.733 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.EPTSa5OLTr 00:19:26.733 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:19:26.733 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.UrHiR76Y6t 00:19:26.733 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:26.733 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:26.733 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.EPTSa5OLTr 00:19:26.733 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.UrHiR76Y6t 00:19:26.733 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:26.733 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:26.994 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.EPTSa5OLTr 00:19:26.994 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.EPTSa5OLTr 00:19:26.994 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:27.254 [2024-11-05 04:30:40.693589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:27.254 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:27.254 04:30:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:27.514 [2024-11-05 04:30:41.030405] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:27.514 [2024-11-05 04:30:41.030614] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:27.514 04:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:27.775 malloc0 00:19:27.775 04:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:27.775 04:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.EPTSa5OLTr 00:19:28.036 04:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.297 04:30:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.EPTSa5OLTr 00:19:38.296 Initializing NVMe Controllers 00:19:38.296 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:38.296 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:38.296 Initialization complete. Launching workers. 00:19:38.296 ======================================================== 00:19:38.297 Latency(us) 00:19:38.297 Device Information : IOPS MiB/s Average min max 00:19:38.297 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18377.18 71.79 3482.64 1179.02 4129.07 00:19:38.297 ======================================================== 00:19:38.297 Total : 18377.18 71.79 3482.64 1179.02 4129.07 00:19:38.297 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.EPTSa5OLTr 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EPTSa5OLTr 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3007967 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3007967 /var/tmp/bdevperf.sock 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3007967 ']' 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:38.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:38.297 04:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:38.297 [2024-11-05 04:30:51.865016] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:19:38.297 [2024-11-05 04:30:51.865076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007967 ] 00:19:38.297 [2024-11-05 04:30:51.922283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.563 [2024-11-05 04:30:51.951533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.563 04:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:38.563 04:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:38.563 04:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EPTSa5OLTr 00:19:38.563 04:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:38.850 [2024-11-05 04:30:52.336609] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:38.850 TLSTESTn1 00:19:38.850 04:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:39.143 Running I/O for 10 seconds... 
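The run now in flight was assembled entirely through scripts/rpc.py; stripped of the xtrace noise, the key plumbing from the trace above is the sequence below (NQNs, key path, and addresses exactly as in this log; rpc.py stands for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py):

    # target side (nvmf_tgt, inside the cvl_0_0_ns_spdk namespace)
    rpc.py sock_set_default_impl -i ssl
    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py keyring_file_add_key key0 /tmp/tmp.EPTSa5OLTr
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

    # initiator side (bdevperf, through its own RPC socket)
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EPTSa5OLTr
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0

The -k flag on the listener is what demands a secure channel, and both sides refer to the PSK by its keyring name (key0), never by file path. The negative cases later in this log each perturb exactly one element of this sequence.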
00:19:41.025 4911.00 IOPS, 19.18 MiB/s [2024-11-05T03:30:55.605Z] 5168.00 IOPS, 20.19 MiB/s [2024-11-05T03:30:56.544Z] 5491.33 IOPS, 21.45 MiB/s [2024-11-05T03:30:57.927Z] 5440.50 IOPS, 21.25 MiB/s [2024-11-05T03:30:58.868Z] 5359.60 IOPS, 20.94 MiB/s [2024-11-05T03:30:59.808Z] 5430.50 IOPS, 21.21 MiB/s [2024-11-05T03:31:00.750Z] 5420.29 IOPS, 21.17 MiB/s [2024-11-05T03:31:01.690Z] 5339.00 IOPS, 20.86 MiB/s [2024-11-05T03:31:02.632Z] 5198.78 IOPS, 20.31 MiB/s [2024-11-05T03:31:02.632Z] 5178.00 IOPS, 20.23 MiB/s 00:19:48.992 Latency(us) 00:19:48.992 [2024-11-05T03:31:02.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:48.992 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:48.992 Verification LBA range: start 0x0 length 0x2000 00:19:48.992 TLSTESTn1 : 10.01 5183.70 20.25 0.00 0.00 24659.87 5570.56 24357.55 00:19:48.992 [2024-11-05T03:31:02.632Z] =================================================================================================================== 00:19:48.992 [2024-11-05T03:31:02.632Z] Total : 5183.70 20.25 0.00 0.00 24659.87 5570.56 24357.55 00:19:48.992 { 00:19:48.992 "results": [ 00:19:48.992 { 00:19:48.992 "job": "TLSTESTn1", 00:19:48.992 "core_mask": "0x4", 00:19:48.992 "workload": "verify", 00:19:48.992 "status": "finished", 00:19:48.992 "verify_range": { 00:19:48.992 "start": 0, 00:19:48.992 "length": 8192 00:19:48.992 }, 00:19:48.992 "queue_depth": 128, 00:19:48.992 "io_size": 4096, 00:19:48.992 "runtime": 10.013695, 00:19:48.992 "iops": 5183.700921587885, 00:19:48.992 "mibps": 20.248831724952677, 00:19:48.992 "io_failed": 0, 00:19:48.992 "io_timeout": 0, 00:19:48.992 "avg_latency_us": 24659.87115255195, 00:19:48.992 "min_latency_us": 5570.56, 00:19:48.992 "max_latency_us": 24357.546666666665 00:19:48.992 } 00:19:48.992 ], 00:19:48.992 "core_count": 1 00:19:48.992 } 00:19:48.992 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:48.992 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3007967 00:19:48.992 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3007967 ']' 00:19:48.992 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3007967 00:19:48.992 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:48.992 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:48.992 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3007967 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3007967' 00:19:49.253 killing process with pid 3007967 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3007967 00:19:49.253 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.253 00:19:49.253 Latency(us) 00:19:49.253 [2024-11-05T03:31:02.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.253 [2024-11-05T03:31:02.893Z] 
=================================================================================================================== 00:19:49.253 [2024-11-05T03:31:02.893Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3007967 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UrHiR76Y6t 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UrHiR76Y6t 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.UrHiR76Y6t 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.UrHiR76Y6t 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3010009 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3010009 /var/tmp/bdevperf.sock 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3010009 ']' 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:49.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
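Cases 147, 150, 153 and 156 of target/tls.sh wrap run_bdevperf in NOT, so a failed attach is the passing outcome. A minimal sketch of that wrapper, simplified from the autotest_common.sh trace visible in this log (the real helper also vets its first argument through valid_exec_arg before running it):

    NOT() {
        local es=0
        "$@" || es=$?
        # exit codes above 128 mean death by signal: that is a crash,
        # not the clean failure a negative test expects
        if ((es > 128)); then
            return "$es"
        fi
        ((es != 0))   # succeed only if the wrapped command failed
    }

This matches the es=1 followed by (( es > 128 )) and (( !es == 0 )) arithmetic traced after each failing run below.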
00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:49.253 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.253 [2024-11-05 04:31:02.799398] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:19:49.253 [2024-11-05 04:31:02.799458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010009 ] 00:19:49.253 [2024-11-05 04:31:02.856550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.253 [2024-11-05 04:31:02.885146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.514 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:49.514 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:49.514 04:31:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.UrHiR76Y6t 00:19:49.514 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:49.775 [2024-11-05 04:31:03.270227] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:49.775 [2024-11-05 04:31:03.280892] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:49.775 [2024-11-05 04:31:03.281496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2142bb0 (107): Transport endpoint is not connected 00:19:49.775 [2024-11-05 04:31:03.282491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2142bb0 (9): Bad file descriptor 00:19:49.775 [2024-11-05 04:31:03.283493] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:49.775 [2024-11-05 04:31:03.283501] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:49.775 [2024-11-05 04:31:03.283507] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:49.775 [2024-11-05 04:31:03.283516] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:19:49.775 request: 00:19:49.775 { 00:19:49.775 "name": "TLSTEST", 00:19:49.775 "trtype": "tcp", 00:19:49.775 "traddr": "10.0.0.2", 00:19:49.775 "adrfam": "ipv4", 00:19:49.775 "trsvcid": "4420", 00:19:49.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:49.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:49.775 "prchk_reftag": false, 00:19:49.775 "prchk_guard": false, 00:19:49.775 "hdgst": false, 00:19:49.775 "ddgst": false, 00:19:49.775 "psk": "key0", 00:19:49.775 "allow_unrecognized_csi": false, 00:19:49.775 "method": "bdev_nvme_attach_controller", 00:19:49.775 "req_id": 1 00:19:49.775 } 00:19:49.775 Got JSON-RPC error response 00:19:49.775 response: 00:19:49.775 { 00:19:49.775 "code": -5, 00:19:49.775 "message": "Input/output error" 00:19:49.775 } 00:19:49.775 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3010009 00:19:49.775 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3010009 ']' 00:19:49.775 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3010009 00:19:49.775 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:49.775 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:49.775 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3010009 00:19:49.775 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:49.775 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:49.775 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3010009' 00:19:49.775 killing process with pid 3010009 00:19:49.775 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3010009 00:19:49.775 Received shutdown signal, test time was about 10.000000 seconds 00:19:49.775 00:19:49.775 Latency(us) 00:19:49.775 [2024-11-05T03:31:03.415Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:49.775 [2024-11-05T03:31:03.415Z] =================================================================================================================== 00:19:49.775 [2024-11-05T03:31:03.415Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:49.775 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3010009 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EPTSa5OLTr 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.EPTSa5OLTr 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.EPTSa5OLTr 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EPTSa5OLTr 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3010325 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3010325 /var/tmp/bdevperf.sock 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3010325 ']' 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:50.036 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.036 [2024-11-05 04:31:03.526454] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:19:50.036 [2024-11-05 04:31:03.526528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010325 ] 00:19:50.036 [2024-11-05 04:31:03.585914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.036 [2024-11-05 04:31:03.614466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.297 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:50.297 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:50.297 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EPTSa5OLTr 00:19:50.297 04:31:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:50.558 [2024-11-05 04:31:04.031461] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:50.558 [2024-11-05 04:31:04.042625] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:50.558 [2024-11-05 04:31:04.042644] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:50.558 [2024-11-05 04:31:04.042663] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:50.558 [2024-11-05 04:31:04.043601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a54bb0 (107): Transport endpoint is not connected 00:19:50.558 [2024-11-05 04:31:04.044597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a54bb0 (9): Bad file descriptor 00:19:50.558 [2024-11-05 04:31:04.045599] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:50.558 [2024-11-05 04:31:04.045609] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:50.558 [2024-11-05 04:31:04.045615] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:50.558 [2024-11-05 04:31:04.045627] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
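The interesting line in the failure above, ahead of the JSON-RPC dump that follows, is tcp_sock_get_key's 'Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1': the client offered its retained PSK under an identity naming host2, but the only key the target holds was bound to host1 by nvmf_subsystem_add_host, so the handshake is refused server-side. As it appears in the error text, the identity is 'NVMe' plus a version digit, 'R' for a retained PSK, a two-digit hash id, then hostnqn and subnqn; a throwaway helper (the function name is ours, the format is copied from the log) makes the mismatch obvious:

    psk_identity() {
        local hostnqn=$1 subnqn=$2
        printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
    }
    psk_identity nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1
    # no key on the target is registered under this identity, hence the -5 below

Case 153 below is the mirror image: right host, wrong subsystem (cnode2), same refusal.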
00:19:50.558 request: 00:19:50.558 { 00:19:50.558 "name": "TLSTEST", 00:19:50.558 "trtype": "tcp", 00:19:50.558 "traddr": "10.0.0.2", 00:19:50.558 "adrfam": "ipv4", 00:19:50.558 "trsvcid": "4420", 00:19:50.558 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.558 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:50.558 "prchk_reftag": false, 00:19:50.558 "prchk_guard": false, 00:19:50.558 "hdgst": false, 00:19:50.558 "ddgst": false, 00:19:50.558 "psk": "key0", 00:19:50.558 "allow_unrecognized_csi": false, 00:19:50.558 "method": "bdev_nvme_attach_controller", 00:19:50.558 "req_id": 1 00:19:50.558 } 00:19:50.558 Got JSON-RPC error response 00:19:50.558 response: 00:19:50.558 { 00:19:50.558 "code": -5, 00:19:50.558 "message": "Input/output error" 00:19:50.558 } 00:19:50.558 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3010325 00:19:50.558 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3010325 ']' 00:19:50.558 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3010325 00:19:50.558 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:50.558 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:50.558 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3010325 00:19:50.558 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:50.558 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:50.558 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3010325' 00:19:50.558 killing process with pid 3010325 00:19:50.558 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3010325 00:19:50.558 Received shutdown signal, test time was about 10.000000 seconds 00:19:50.558 00:19:50.558 Latency(us) 00:19:50.558 [2024-11-05T03:31:04.198Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.558 [2024-11-05T03:31:04.198Z] =================================================================================================================== 00:19:50.558 [2024-11-05T03:31:04.198Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:50.558 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3010325 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EPTSa5OLTr 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.EPTSa5OLTr 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.EPTSa5OLTr 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.EPTSa5OLTr 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3010340 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3010340 /var/tmp/bdevperf.sock 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3010340 ']' 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.819 [2024-11-05 04:31:04.275363] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:19:50.819 [2024-11-05 04:31:04.275423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010340 ] 00:19:50.819 [2024-11-05 04:31:04.332230] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.819 [2024-11-05 04:31:04.360873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:50.819 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.EPTSa5OLTr 00:19:51.080 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:51.340 [2024-11-05 04:31:04.737833] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.340 [2024-11-05 04:31:04.744334] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:51.340 [2024-11-05 04:31:04.744352] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:51.340 [2024-11-05 04:31:04.744370] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:51.340 [2024-11-05 04:31:04.744806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2bb0 (107): Transport endpoint is not connected 00:19:51.340 [2024-11-05 04:31:04.745801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2bb0 (9): Bad file descriptor 00:19:51.340 [2024-11-05 04:31:04.746804] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:51.340 [2024-11-05 04:31:04.746813] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:51.340 [2024-11-05 04:31:04.746819] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:51.340 [2024-11-05 04:31:04.746829] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:19:51.340 request: 00:19:51.340 { 00:19:51.340 "name": "TLSTEST", 00:19:51.340 "trtype": "tcp", 00:19:51.340 "traddr": "10.0.0.2", 00:19:51.340 "adrfam": "ipv4", 00:19:51.340 "trsvcid": "4420", 00:19:51.340 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:51.340 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.340 "prchk_reftag": false, 00:19:51.340 "prchk_guard": false, 00:19:51.340 "hdgst": false, 00:19:51.340 "ddgst": false, 00:19:51.340 "psk": "key0", 00:19:51.340 "allow_unrecognized_csi": false, 00:19:51.340 "method": "bdev_nvme_attach_controller", 00:19:51.340 "req_id": 1 00:19:51.340 } 00:19:51.340 Got JSON-RPC error response 00:19:51.340 response: 00:19:51.340 { 00:19:51.340 "code": -5, 00:19:51.340 "message": "Input/output error" 00:19:51.340 } 00:19:51.340 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3010340 00:19:51.340 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3010340 ']' 00:19:51.340 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3010340 00:19:51.340 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3010340 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3010340' 00:19:51.341 killing process with pid 3010340 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3010340 00:19:51.341 Received shutdown signal, test time was about 10.000000 seconds 00:19:51.341 00:19:51.341 Latency(us) 00:19:51.341 [2024-11-05T03:31:04.981Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:51.341 [2024-11-05T03:31:04.981Z] =================================================================================================================== 00:19:51.341 [2024-11-05T03:31:04.981Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3010340 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:51.341 
04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3010577 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3010577 /var/tmp/bdevperf.sock 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3010577 ']' 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:51.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:51.341 04:31:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:51.341 [2024-11-05 04:31:04.973689] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
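This last negative case passes an empty string where the key file path belongs, so, as the trace below shows, it fails a step earlier than the others: keyring_file_add_key rejects the non-absolute path with -1 before any connection is attempted, and the subsequent attach dies with -126 'Required key not available' because key0 never made it into the keyring. The check is reproducible in isolation, given a bdevperf instance listening on that RPC socket (same rpc.py as used throughout this log):

    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "" \
        || echo "rejected: keyring file paths must be absolute"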
00:19:51.341 [2024-11-05 04:31:04.973755] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3010577 ] 00:19:51.602 [2024-11-05 04:31:05.030713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.602 [2024-11-05 04:31:05.059462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:51.602 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:51.602 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:51.602 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:51.862 [2024-11-05 04:31:05.288237] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:51.862 [2024-11-05 04:31:05.288257] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:51.862 request: 00:19:51.862 { 00:19:51.862 "name": "key0", 00:19:51.862 "path": "", 00:19:51.862 "method": "keyring_file_add_key", 00:19:51.862 "req_id": 1 00:19:51.862 } 00:19:51.862 Got JSON-RPC error response 00:19:51.862 response: 00:19:51.862 { 00:19:51.862 "code": -1, 00:19:51.862 "message": "Operation not permitted" 00:19:51.862 } 00:19:51.862 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:51.862 [2024-11-05 04:31:05.472780] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.862 [2024-11-05 04:31:05.472801] bdev_nvme.c:6529:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:51.862 request: 00:19:51.862 { 00:19:51.862 "name": "TLSTEST", 00:19:51.862 "trtype": "tcp", 00:19:51.862 "traddr": "10.0.0.2", 00:19:51.862 "adrfam": "ipv4", 00:19:51.862 "trsvcid": "4420", 00:19:51.862 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:51.862 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:51.862 "prchk_reftag": false, 00:19:51.862 "prchk_guard": false, 00:19:51.862 "hdgst": false, 00:19:51.862 "ddgst": false, 00:19:51.862 "psk": "key0", 00:19:51.862 "allow_unrecognized_csi": false, 00:19:51.862 "method": "bdev_nvme_attach_controller", 00:19:51.862 "req_id": 1 00:19:51.862 } 00:19:51.862 Got JSON-RPC error response 00:19:51.862 response: 00:19:51.862 { 00:19:51.862 "code": -126, 00:19:51.862 "message": "Required key not available" 00:19:51.862 } 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3010577 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3010577 ']' 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3010577 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
3010577 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3010577' 00:19:52.122 killing process with pid 3010577 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3010577 00:19:52.122 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.122 00:19:52.122 Latency(us) 00:19:52.122 [2024-11-05T03:31:05.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.122 [2024-11-05T03:31:05.762Z] =================================================================================================================== 00:19:52.122 [2024-11-05T03:31:05.762Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3010577 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 3005109 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3005109 ']' 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3005109 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3005109 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3005109' 00:19:52.122 killing process with pid 3005109 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3005109 00:19:52.122 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3005109 00:19:52.383 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:52.383 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:52.383 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.383 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:52.383 04:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:52.383 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:52.383 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:52.383 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:52.383 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:52.383 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.qzI2yU5L6n 00:19:52.383 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:52.383 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.qzI2yU5L6n 00:19:52.383 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:52.383 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:52.383 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:52.384 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.384 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3010709 00:19:52.384 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3010709 00:19:52.384 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:52.384 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3010709 ']' 00:19:52.384 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.384 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:52.384 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:52.384 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:52.384 04:31:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:52.384 [2024-11-05 04:31:05.958668] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:19:52.384 [2024-11-05 04:31:05.958731] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:52.644 [2024-11-05 04:31:06.047405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:52.644 [2024-11-05 04:31:06.077720] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:52.644 [2024-11-05 04:31:06.077754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
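The key_long above is produced by format_interchange_psk, which wraps the configured secret in the NVMe TLS PSK interchange format: "NVMeTLSkey-1:<digest>:" followed by base64 of the secret bytes with a CRC-32 appended (digest 02 selects the SHA-384 PSK variant here). A sketch of the encoding, assuming the CRC-32 is appended little-endian as in the python helper nvmf/common.sh@733 pipes to:

  python3 - <<'EOF'
  import base64, struct, zlib
  # the configured secret is used byte-for-byte (ASCII hex string in this run)
  key = b"00112233445566778899aabbccddeeff0011223344556677"
  digest = 2  # 1 = SHA-256 PSK, 2 = SHA-384 PSK
  # assumed payload layout: secret || CRC-32(secret), little-endian
  payload = key + struct.pack("<I", zlib.crc32(key))
  print("NVMeTLSkey-1:%02d:%s:" % (digest, base64.b64encode(payload).decode()))
  EOF
  # key_long in this run:
  # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The test then writes this string to a mktemp file (/tmp/tmp.qzI2yU5L6n) and chmods it 0600, which matters for the permission checks later in the suite.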
00:19:52.644 [2024-11-05 04:31:06.077760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:52.644 [2024-11-05 04:31:06.077765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:52.644 [2024-11-05 04:31:06.077769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:52.644 [2024-11-05 04:31:06.078214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.215 04:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:53.215 04:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:53.215 04:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:53.215 04:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:53.215 04:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.215 04:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.215 04:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.qzI2yU5L6n 00:19:53.215 04:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qzI2yU5L6n 00:19:53.215 04:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:53.476 [2024-11-05 04:31:06.938077] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:53.476 04:31:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:53.736 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:53.736 [2024-11-05 04:31:07.274914] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:53.736 [2024-11-05 04:31:07.275126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:53.736 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:53.996 malloc0 00:19:53.996 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:54.257 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qzI2yU5L6n 00:19:54.257 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qzI2yU5L6n 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qzI2yU5L6n 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3011109 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3011109 /var/tmp/bdevperf.sock 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3011109 ']' 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:54.518 04:31:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.518 [2024-11-05 04:31:08.029574] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
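setup_nvmf_tgt (tls.sh@50-@59, traced above) stands up the TLS-enabled target that the bdevperf job just launched will connect to. Condensed, the target-side sequence is the following, with the key path from this run (target RPCs go to the default /var/tmp/spdk.sock, so no -s flag):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  KEY=/tmp/tmp.qzI2yU5L6n
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k          # -k: listener requires TLS
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 $KEY        # key file must be 0600 (tested below)
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0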
00:19:54.518 [2024-11-05 04:31:08.029630] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3011109 ] 00:19:54.518 [2024-11-05 04:31:08.086855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.518 [2024-11-05 04:31:08.115873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:54.778 04:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:54.778 04:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:19:54.778 04:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qzI2yU5L6n 00:19:54.778 04:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:55.039 [2024-11-05 04:31:08.505013] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.039 TLSTESTn1 00:19:55.039 04:31:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:55.299 Running I/O for 10 seconds... 00:19:57.180 5442.00 IOPS, 21.26 MiB/s [2024-11-05T03:31:11.763Z] 6009.50 IOPS, 23.47 MiB/s [2024-11-05T03:31:12.705Z] 6017.33 IOPS, 23.51 MiB/s [2024-11-05T03:31:14.087Z] 6078.50 IOPS, 23.74 MiB/s [2024-11-05T03:31:15.027Z] 6016.20 IOPS, 23.50 MiB/s [2024-11-05T03:31:15.968Z] 5812.00 IOPS, 22.70 MiB/s [2024-11-05T03:31:16.909Z] 5671.71 IOPS, 22.16 MiB/s [2024-11-05T03:31:17.849Z] 5545.38 IOPS, 21.66 MiB/s [2024-11-05T03:31:18.789Z] 5435.33 IOPS, 21.23 MiB/s [2024-11-05T03:31:18.789Z] 5386.20 IOPS, 21.04 MiB/s 00:20:05.149 Latency(us) 00:20:05.149 [2024-11-05T03:31:18.789Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.149 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:05.149 Verification LBA range: start 0x0 length 0x2000 00:20:05.149 TLSTESTn1 : 10.02 5385.84 21.04 0.00 0.00 23724.60 5297.49 77769.39 00:20:05.149 [2024-11-05T03:31:18.789Z] =================================================================================================================== 00:20:05.149 [2024-11-05T03:31:18.789Z] Total : 5385.84 21.04 0.00 0.00 23724.60 5297.49 77769.39 00:20:05.149 { 00:20:05.149 "results": [ 00:20:05.149 { 00:20:05.149 "job": "TLSTESTn1", 00:20:05.149 "core_mask": "0x4", 00:20:05.149 "workload": "verify", 00:20:05.149 "status": "finished", 00:20:05.149 "verify_range": { 00:20:05.149 "start": 0, 00:20:05.149 "length": 8192 00:20:05.149 }, 00:20:05.149 "queue_depth": 128, 00:20:05.149 "io_size": 4096, 00:20:05.149 "runtime": 10.024244, 00:20:05.149 "iops": 5385.8425632895605, 00:20:05.149 "mibps": 21.038447512849846, 00:20:05.149 "io_failed": 0, 00:20:05.149 "io_timeout": 0, 00:20:05.149 "avg_latency_us": 23724.598121839634, 00:20:05.149 "min_latency_us": 5297.493333333333, 00:20:05.149 "max_latency_us": 77769.38666666667 00:20:05.149 } 00:20:05.149 ], 00:20:05.149 
"core_count": 1 00:20:05.150 } 00:20:05.150 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:05.150 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3011109 00:20:05.150 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3011109 ']' 00:20:05.150 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3011109 00:20:05.150 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:05.150 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:05.150 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3011109 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3011109' 00:20:05.410 killing process with pid 3011109 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3011109 00:20:05.410 Received shutdown signal, test time was about 10.000000 seconds 00:20:05.410 00:20:05.410 Latency(us) 00:20:05.410 [2024-11-05T03:31:19.050Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.410 [2024-11-05T03:31:19.050Z] =================================================================================================================== 00:20:05.410 [2024-11-05T03:31:19.050Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3011109 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.qzI2yU5L6n 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qzI2yU5L6n 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qzI2yU5L6n 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.qzI2yU5L6n 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.qzI2yU5L6n 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3013389 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3013389 /var/tmp/bdevperf.sock 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3013389 ']' 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:05.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:05.410 04:31:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:05.410 [2024-11-05 04:31:18.992701] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
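Having verified I/O over TLS, tls.sh@171 loosens the key file to 0666 and tls.sh@172 re-runs the bdevperf flow expecting failure: the keyring refuses key files that are group- or world-accessible. A minimal sketch of the negative path, reusing the RPC shorthand from above:

  chmod 0666 /tmp/tmp.qzI2yU5L6n
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qzI2yU5L6n
  # -> "Invalid permissions for key file '/tmp/tmp.qzI2yU5L6n': 0100666" (code -1),
  #    and the subsequent bdev_nvme_attach_controller --psk key0 fails with -126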
00:20:05.410 [2024-11-05 04:31:18.992766] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3013389 ] 00:20:05.670 [2024-11-05 04:31:19.049985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:05.670 [2024-11-05 04:31:19.078504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.670 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:05.670 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:05.670 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qzI2yU5L6n 00:20:05.931 [2024-11-05 04:31:19.311189] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qzI2yU5L6n': 0100666 00:20:05.931 [2024-11-05 04:31:19.311210] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:05.931 request: 00:20:05.931 { 00:20:05.931 "name": "key0", 00:20:05.931 "path": "/tmp/tmp.qzI2yU5L6n", 00:20:05.931 "method": "keyring_file_add_key", 00:20:05.931 "req_id": 1 00:20:05.931 } 00:20:05.931 Got JSON-RPC error response 00:20:05.931 response: 00:20:05.931 { 00:20:05.931 "code": -1, 00:20:05.931 "message": "Operation not permitted" 00:20:05.931 } 00:20:05.931 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:05.931 [2024-11-05 04:31:19.463644] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:05.931 [2024-11-05 04:31:19.463665] bdev_nvme.c:6529:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:20:05.931 request: 00:20:05.931 { 00:20:05.931 "name": "TLSTEST", 00:20:05.931 "trtype": "tcp", 00:20:05.931 "traddr": "10.0.0.2", 00:20:05.931 "adrfam": "ipv4", 00:20:05.931 "trsvcid": "4420", 00:20:05.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:05.931 "prchk_reftag": false, 00:20:05.931 "prchk_guard": false, 00:20:05.931 "hdgst": false, 00:20:05.931 "ddgst": false, 00:20:05.931 "psk": "key0", 00:20:05.931 "allow_unrecognized_csi": false, 00:20:05.931 "method": "bdev_nvme_attach_controller", 00:20:05.931 "req_id": 1 00:20:05.931 } 00:20:05.931 Got JSON-RPC error response 00:20:05.931 response: 00:20:05.931 { 00:20:05.931 "code": -126, 00:20:05.931 "message": "Required key not available" 00:20:05.931 } 00:20:05.931 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3013389 00:20:05.931 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3013389 ']' 00:20:05.931 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3013389 00:20:05.931 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:05.931 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:05.931 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3013389 00:20:05.931 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:05.931 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:05.931 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3013389' 00:20:05.931 killing process with pid 3013389 00:20:05.931 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3013389 00:20:05.931 Received shutdown signal, test time was about 10.000000 seconds 00:20:05.931 00:20:05.931 Latency(us) 00:20:05.931 [2024-11-05T03:31:19.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.931 [2024-11-05T03:31:19.571Z] =================================================================================================================== 00:20:05.931 [2024-11-05T03:31:19.571Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:05.931 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3013389 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 3010709 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3010709 ']' 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3010709 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3010709 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3010709' 00:20:06.191 killing process with pid 3010709 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3010709 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3010709 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=3013427 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3013427 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3013427 ']' 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:06.191 04:31:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:06.451 [2024-11-05 04:31:19.869559] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:20:06.451 [2024-11-05 04:31:19.869620] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.451 [2024-11-05 04:31:19.960983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.451 [2024-11-05 04:31:19.989678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.451 [2024-11-05 04:31:19.989707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.451 [2024-11-05 04:31:19.989713] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.451 [2024-11-05 04:31:19.989718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.451 [2024-11-05 04:31:19.989726] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
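The next records (tls.sh@178, NOT setup_nvmf_tgt) repeat the target-side setup while the key file is still 0666. The failure surfaces in two stages: keyring_file_add_key rejects the file with the same permission error, and nvmf_subsystem_add_host --psk key0 then fails because the key was never registered. Sketched against the target RPC socket:

  # key file still 0666, so target-side registration fails the same way:
  $RPC keyring_file_add_key key0 /tmp/tmp.qzI2yU5L6n
  # -> code -1, "Operation not permitted"
  # referencing the missing key when authorizing the host surfaces as -32603:
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
      nqn.2016-06.io.spdk:host1 --psk key0
  # -> "Key 'key0' does not exist" ... "Internal error"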
00:20:06.451 [2024-11-05 04:31:19.990154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.020 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:07.020 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:07.280 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:07.280 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:07.280 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:07.280 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.280 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.qzI2yU5L6n 00:20:07.280 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:07.280 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.qzI2yU5L6n 00:20:07.280 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:20:07.280 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.280 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:20:07.280 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:07.280 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.qzI2yU5L6n 00:20:07.280 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qzI2yU5L6n 00:20:07.280 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:07.280 [2024-11-05 04:31:20.859573] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.280 04:31:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:07.540 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:07.800 [2024-11-05 04:31:21.180364] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:07.800 [2024-11-05 04:31:21.180569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.800 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:07.800 malloc0 00:20:07.800 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:08.059 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qzI2yU5L6n 00:20:08.059 [2024-11-05 
04:31:21.671433] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.qzI2yU5L6n': 0100666 00:20:08.059 [2024-11-05 04:31:21.671451] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:20:08.059 request: 00:20:08.059 { 00:20:08.059 "name": "key0", 00:20:08.059 "path": "/tmp/tmp.qzI2yU5L6n", 00:20:08.059 "method": "keyring_file_add_key", 00:20:08.059 "req_id": 1 00:20:08.059 } 00:20:08.059 Got JSON-RPC error response 00:20:08.059 response: 00:20:08.059 { 00:20:08.059 "code": -1, 00:20:08.059 "message": "Operation not permitted" 00:20:08.059 } 00:20:08.059 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:08.319 [2024-11-05 04:31:21.823827] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:20:08.319 [2024-11-05 04:31:21.823856] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:20:08.319 request: 00:20:08.319 { 00:20:08.319 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.319 "host": "nqn.2016-06.io.spdk:host1", 00:20:08.319 "psk": "key0", 00:20:08.319 "method": "nvmf_subsystem_add_host", 00:20:08.319 "req_id": 1 00:20:08.319 } 00:20:08.319 Got JSON-RPC error response 00:20:08.319 response: 00:20:08.319 { 00:20:08.319 "code": -32603, 00:20:08.319 "message": "Internal error" 00:20:08.319 } 00:20:08.319 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:08.319 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.319 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.319 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.319 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 3013427 00:20:08.319 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3013427 ']' 00:20:08.319 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3013427 00:20:08.319 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:08.319 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:08.319 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3013427 00:20:08.319 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:08.319 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:08.319 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3013427' 00:20:08.319 killing process with pid 3013427 00:20:08.319 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3013427 00:20:08.319 04:31:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3013427 00:20:08.579 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.qzI2yU5L6n 00:20:08.579 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:20:08.579 04:31:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.579 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:08.579 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.579 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3013932 00:20:08.579 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3013932 00:20:08.579 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:08.579 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3013932 ']' 00:20:08.579 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.579 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:08.579 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.579 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:08.579 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:08.579 [2024-11-05 04:31:22.085891] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:20:08.579 [2024-11-05 04:31:22.085952] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.579 [2024-11-05 04:31:22.174736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.579 [2024-11-05 04:31:22.203869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.579 [2024-11-05 04:31:22.203903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.579 [2024-11-05 04:31:22.203908] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.579 [2024-11-05 04:31:22.203913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.579 [2024-11-05 04:31:22.203918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
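With the key restored to 0600 (tls.sh@182), the final test in this stretch (tls.sh@185-@199) repeats the full setup, attaches TLSTESTn1 over the TLS listener, and snapshots both sides' JSON configuration with save_config; the two dumps follow below as tgtconf and bdevperfconf. A sketch of capturing and comparing them (output file names and the jq filter are illustrative, not part of the test script):

  $RPC save_config > tgtconf.json                                 # target side
  $RPC -s /var/tmp/bdevperf.sock save_config > bdevperfconf.json  # initiator side
  # both dumps should carry the same keyring entry for key0:
  jq '.subsystems[] | select(.subsystem == "keyring")' tgtconf.json bdevperfconf.json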
00:20:08.579 [2024-11-05 04:31:22.204381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.518 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:09.518 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:09.518 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.518 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:09.518 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:09.518 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.518 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.qzI2yU5L6n 00:20:09.518 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qzI2yU5L6n 00:20:09.518 04:31:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:09.518 [2024-11-05 04:31:23.051722] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:09.518 04:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:09.779 04:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:09.779 [2024-11-05 04:31:23.376520] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:09.779 [2024-11-05 04:31:23.376720] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.779 04:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:10.040 malloc0 00:20:10.040 04:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:10.300 04:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qzI2yU5L6n 00:20:10.300 04:31:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:10.561 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:10.561 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=3014445 00:20:10.561 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:10.561 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 3014445 /var/tmp/bdevperf.sock 00:20:10.561 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3014445 ']' 00:20:10.561 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.561 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:10.561 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:10.561 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:10.561 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:10.561 [2024-11-05 04:31:24.093119] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:20:10.561 [2024-11-05 04:31:24.093174] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3014445 ] 00:20:10.561 [2024-11-05 04:31:24.153000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.561 [2024-11-05 04:31:24.182000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.822 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:10.822 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:10.822 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qzI2yU5L6n 00:20:10.822 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:11.082 [2024-11-05 04:31:24.595172] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.082 TLSTESTn1 00:20:11.082 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:11.343 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:20:11.343 "subsystems": [ 00:20:11.343 { 00:20:11.343 "subsystem": "keyring", 00:20:11.343 "config": [ 00:20:11.343 { 00:20:11.343 "method": "keyring_file_add_key", 00:20:11.343 "params": { 00:20:11.343 "name": "key0", 00:20:11.343 "path": "/tmp/tmp.qzI2yU5L6n" 00:20:11.343 } 00:20:11.343 } 00:20:11.343 ] 00:20:11.343 }, 00:20:11.343 { 00:20:11.343 "subsystem": "iobuf", 00:20:11.343 "config": [ 00:20:11.343 { 00:20:11.343 "method": "iobuf_set_options", 00:20:11.343 "params": { 00:20:11.343 "small_pool_count": 8192, 00:20:11.343 "large_pool_count": 1024, 00:20:11.343 "small_bufsize": 8192, 00:20:11.343 "large_bufsize": 135168, 00:20:11.343 "enable_numa": false 00:20:11.343 } 00:20:11.343 } 00:20:11.343 ] 00:20:11.343 }, 00:20:11.343 { 00:20:11.343 "subsystem": "sock", 00:20:11.343 "config": [ 00:20:11.343 { 00:20:11.343 "method": "sock_set_default_impl", 00:20:11.343 "params": { 00:20:11.343 "impl_name": "posix" 
00:20:11.343 } 00:20:11.343 }, 00:20:11.343 { 00:20:11.343 "method": "sock_impl_set_options", 00:20:11.343 "params": { 00:20:11.343 "impl_name": "ssl", 00:20:11.343 "recv_buf_size": 4096, 00:20:11.343 "send_buf_size": 4096, 00:20:11.343 "enable_recv_pipe": true, 00:20:11.343 "enable_quickack": false, 00:20:11.343 "enable_placement_id": 0, 00:20:11.343 "enable_zerocopy_send_server": true, 00:20:11.343 "enable_zerocopy_send_client": false, 00:20:11.343 "zerocopy_threshold": 0, 00:20:11.343 "tls_version": 0, 00:20:11.343 "enable_ktls": false 00:20:11.343 } 00:20:11.343 }, 00:20:11.343 { 00:20:11.343 "method": "sock_impl_set_options", 00:20:11.343 "params": { 00:20:11.343 "impl_name": "posix", 00:20:11.343 "recv_buf_size": 2097152, 00:20:11.343 "send_buf_size": 2097152, 00:20:11.343 "enable_recv_pipe": true, 00:20:11.343 "enable_quickack": false, 00:20:11.343 "enable_placement_id": 0, 00:20:11.343 "enable_zerocopy_send_server": true, 00:20:11.343 "enable_zerocopy_send_client": false, 00:20:11.343 "zerocopy_threshold": 0, 00:20:11.343 "tls_version": 0, 00:20:11.343 "enable_ktls": false 00:20:11.343 } 00:20:11.343 } 00:20:11.343 ] 00:20:11.343 }, 00:20:11.343 { 00:20:11.343 "subsystem": "vmd", 00:20:11.343 "config": [] 00:20:11.343 }, 00:20:11.343 { 00:20:11.343 "subsystem": "accel", 00:20:11.343 "config": [ 00:20:11.343 { 00:20:11.343 "method": "accel_set_options", 00:20:11.343 "params": { 00:20:11.343 "small_cache_size": 128, 00:20:11.343 "large_cache_size": 16, 00:20:11.343 "task_count": 2048, 00:20:11.343 "sequence_count": 2048, 00:20:11.343 "buf_count": 2048 00:20:11.343 } 00:20:11.343 } 00:20:11.343 ] 00:20:11.343 }, 00:20:11.343 { 00:20:11.343 "subsystem": "bdev", 00:20:11.343 "config": [ 00:20:11.343 { 00:20:11.343 "method": "bdev_set_options", 00:20:11.343 "params": { 00:20:11.343 "bdev_io_pool_size": 65535, 00:20:11.343 "bdev_io_cache_size": 256, 00:20:11.343 "bdev_auto_examine": true, 00:20:11.343 "iobuf_small_cache_size": 128, 00:20:11.343 "iobuf_large_cache_size": 16 00:20:11.343 } 00:20:11.343 }, 00:20:11.343 { 00:20:11.343 "method": "bdev_raid_set_options", 00:20:11.343 "params": { 00:20:11.343 "process_window_size_kb": 1024, 00:20:11.343 "process_max_bandwidth_mb_sec": 0 00:20:11.343 } 00:20:11.343 }, 00:20:11.343 { 00:20:11.343 "method": "bdev_iscsi_set_options", 00:20:11.343 "params": { 00:20:11.343 "timeout_sec": 30 00:20:11.343 } 00:20:11.343 }, 00:20:11.343 { 00:20:11.343 "method": "bdev_nvme_set_options", 00:20:11.343 "params": { 00:20:11.343 "action_on_timeout": "none", 00:20:11.343 "timeout_us": 0, 00:20:11.343 "timeout_admin_us": 0, 00:20:11.343 "keep_alive_timeout_ms": 10000, 00:20:11.343 "arbitration_burst": 0, 00:20:11.343 "low_priority_weight": 0, 00:20:11.343 "medium_priority_weight": 0, 00:20:11.343 "high_priority_weight": 0, 00:20:11.343 "nvme_adminq_poll_period_us": 10000, 00:20:11.343 "nvme_ioq_poll_period_us": 0, 00:20:11.343 "io_queue_requests": 0, 00:20:11.343 "delay_cmd_submit": true, 00:20:11.343 "transport_retry_count": 4, 00:20:11.343 "bdev_retry_count": 3, 00:20:11.343 "transport_ack_timeout": 0, 00:20:11.343 "ctrlr_loss_timeout_sec": 0, 00:20:11.343 "reconnect_delay_sec": 0, 00:20:11.343 "fast_io_fail_timeout_sec": 0, 00:20:11.343 "disable_auto_failback": false, 00:20:11.343 "generate_uuids": false, 00:20:11.343 "transport_tos": 0, 00:20:11.343 "nvme_error_stat": false, 00:20:11.343 "rdma_srq_size": 0, 00:20:11.343 "io_path_stat": false, 00:20:11.343 "allow_accel_sequence": false, 00:20:11.343 "rdma_max_cq_size": 0, 00:20:11.343 
"rdma_cm_event_timeout_ms": 0, 00:20:11.343 "dhchap_digests": [ 00:20:11.343 "sha256", 00:20:11.343 "sha384", 00:20:11.343 "sha512" 00:20:11.343 ], 00:20:11.343 "dhchap_dhgroups": [ 00:20:11.343 "null", 00:20:11.343 "ffdhe2048", 00:20:11.343 "ffdhe3072", 00:20:11.343 "ffdhe4096", 00:20:11.343 "ffdhe6144", 00:20:11.343 "ffdhe8192" 00:20:11.344 ] 00:20:11.344 } 00:20:11.344 }, 00:20:11.344 { 00:20:11.344 "method": "bdev_nvme_set_hotplug", 00:20:11.344 "params": { 00:20:11.344 "period_us": 100000, 00:20:11.344 "enable": false 00:20:11.344 } 00:20:11.344 }, 00:20:11.344 { 00:20:11.344 "method": "bdev_malloc_create", 00:20:11.344 "params": { 00:20:11.344 "name": "malloc0", 00:20:11.344 "num_blocks": 8192, 00:20:11.344 "block_size": 4096, 00:20:11.344 "physical_block_size": 4096, 00:20:11.344 "uuid": "c574278e-694d-444f-9072-14a908436de2", 00:20:11.344 "optimal_io_boundary": 0, 00:20:11.344 "md_size": 0, 00:20:11.344 "dif_type": 0, 00:20:11.344 "dif_is_head_of_md": false, 00:20:11.344 "dif_pi_format": 0 00:20:11.344 } 00:20:11.344 }, 00:20:11.344 { 00:20:11.344 "method": "bdev_wait_for_examine" 00:20:11.344 } 00:20:11.344 ] 00:20:11.344 }, 00:20:11.344 { 00:20:11.344 "subsystem": "nbd", 00:20:11.344 "config": [] 00:20:11.344 }, 00:20:11.344 { 00:20:11.344 "subsystem": "scheduler", 00:20:11.344 "config": [ 00:20:11.344 { 00:20:11.344 "method": "framework_set_scheduler", 00:20:11.344 "params": { 00:20:11.344 "name": "static" 00:20:11.344 } 00:20:11.344 } 00:20:11.344 ] 00:20:11.344 }, 00:20:11.344 { 00:20:11.344 "subsystem": "nvmf", 00:20:11.344 "config": [ 00:20:11.344 { 00:20:11.344 "method": "nvmf_set_config", 00:20:11.344 "params": { 00:20:11.344 "discovery_filter": "match_any", 00:20:11.344 "admin_cmd_passthru": { 00:20:11.344 "identify_ctrlr": false 00:20:11.344 }, 00:20:11.344 "dhchap_digests": [ 00:20:11.344 "sha256", 00:20:11.344 "sha384", 00:20:11.344 "sha512" 00:20:11.344 ], 00:20:11.344 "dhchap_dhgroups": [ 00:20:11.344 "null", 00:20:11.344 "ffdhe2048", 00:20:11.344 "ffdhe3072", 00:20:11.344 "ffdhe4096", 00:20:11.344 "ffdhe6144", 00:20:11.344 "ffdhe8192" 00:20:11.344 ] 00:20:11.344 } 00:20:11.344 }, 00:20:11.344 { 00:20:11.344 "method": "nvmf_set_max_subsystems", 00:20:11.344 "params": { 00:20:11.344 "max_subsystems": 1024 00:20:11.344 } 00:20:11.344 }, 00:20:11.344 { 00:20:11.344 "method": "nvmf_set_crdt", 00:20:11.344 "params": { 00:20:11.344 "crdt1": 0, 00:20:11.344 "crdt2": 0, 00:20:11.344 "crdt3": 0 00:20:11.344 } 00:20:11.344 }, 00:20:11.344 { 00:20:11.344 "method": "nvmf_create_transport", 00:20:11.344 "params": { 00:20:11.344 "trtype": "TCP", 00:20:11.344 "max_queue_depth": 128, 00:20:11.344 "max_io_qpairs_per_ctrlr": 127, 00:20:11.344 "in_capsule_data_size": 4096, 00:20:11.344 "max_io_size": 131072, 00:20:11.344 "io_unit_size": 131072, 00:20:11.344 "max_aq_depth": 128, 00:20:11.344 "num_shared_buffers": 511, 00:20:11.344 "buf_cache_size": 4294967295, 00:20:11.344 "dif_insert_or_strip": false, 00:20:11.344 "zcopy": false, 00:20:11.344 "c2h_success": false, 00:20:11.344 "sock_priority": 0, 00:20:11.344 "abort_timeout_sec": 1, 00:20:11.344 "ack_timeout": 0, 00:20:11.344 "data_wr_pool_size": 0 00:20:11.344 } 00:20:11.344 }, 00:20:11.344 { 00:20:11.344 "method": "nvmf_create_subsystem", 00:20:11.344 "params": { 00:20:11.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.344 "allow_any_host": false, 00:20:11.344 "serial_number": "SPDK00000000000001", 00:20:11.344 "model_number": "SPDK bdev Controller", 00:20:11.344 "max_namespaces": 10, 00:20:11.344 "min_cntlid": 1, 00:20:11.344 
"max_cntlid": 65519, 00:20:11.344 "ana_reporting": false 00:20:11.344 } 00:20:11.344 }, 00:20:11.344 { 00:20:11.344 "method": "nvmf_subsystem_add_host", 00:20:11.344 "params": { 00:20:11.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.344 "host": "nqn.2016-06.io.spdk:host1", 00:20:11.344 "psk": "key0" 00:20:11.344 } 00:20:11.344 }, 00:20:11.344 { 00:20:11.344 "method": "nvmf_subsystem_add_ns", 00:20:11.344 "params": { 00:20:11.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.344 "namespace": { 00:20:11.344 "nsid": 1, 00:20:11.344 "bdev_name": "malloc0", 00:20:11.344 "nguid": "C574278E694D444F907214A908436DE2", 00:20:11.344 "uuid": "c574278e-694d-444f-9072-14a908436de2", 00:20:11.344 "no_auto_visible": false 00:20:11.344 } 00:20:11.344 } 00:20:11.344 }, 00:20:11.344 { 00:20:11.344 "method": "nvmf_subsystem_add_listener", 00:20:11.344 "params": { 00:20:11.344 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.344 "listen_address": { 00:20:11.344 "trtype": "TCP", 00:20:11.344 "adrfam": "IPv4", 00:20:11.344 "traddr": "10.0.0.2", 00:20:11.344 "trsvcid": "4420" 00:20:11.344 }, 00:20:11.344 "secure_channel": true 00:20:11.344 } 00:20:11.344 } 00:20:11.344 ] 00:20:11.344 } 00:20:11.344 ] 00:20:11.344 }' 00:20:11.344 04:31:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:11.605 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:20:11.605 "subsystems": [ 00:20:11.605 { 00:20:11.605 "subsystem": "keyring", 00:20:11.605 "config": [ 00:20:11.605 { 00:20:11.605 "method": "keyring_file_add_key", 00:20:11.605 "params": { 00:20:11.605 "name": "key0", 00:20:11.605 "path": "/tmp/tmp.qzI2yU5L6n" 00:20:11.605 } 00:20:11.605 } 00:20:11.605 ] 00:20:11.605 }, 00:20:11.605 { 00:20:11.605 "subsystem": "iobuf", 00:20:11.605 "config": [ 00:20:11.605 { 00:20:11.605 "method": "iobuf_set_options", 00:20:11.605 "params": { 00:20:11.605 "small_pool_count": 8192, 00:20:11.605 "large_pool_count": 1024, 00:20:11.605 "small_bufsize": 8192, 00:20:11.605 "large_bufsize": 135168, 00:20:11.605 "enable_numa": false 00:20:11.605 } 00:20:11.605 } 00:20:11.605 ] 00:20:11.605 }, 00:20:11.605 { 00:20:11.605 "subsystem": "sock", 00:20:11.605 "config": [ 00:20:11.605 { 00:20:11.605 "method": "sock_set_default_impl", 00:20:11.605 "params": { 00:20:11.605 "impl_name": "posix" 00:20:11.605 } 00:20:11.605 }, 00:20:11.605 { 00:20:11.605 "method": "sock_impl_set_options", 00:20:11.605 "params": { 00:20:11.605 "impl_name": "ssl", 00:20:11.605 "recv_buf_size": 4096, 00:20:11.605 "send_buf_size": 4096, 00:20:11.605 "enable_recv_pipe": true, 00:20:11.605 "enable_quickack": false, 00:20:11.605 "enable_placement_id": 0, 00:20:11.605 "enable_zerocopy_send_server": true, 00:20:11.605 "enable_zerocopy_send_client": false, 00:20:11.605 "zerocopy_threshold": 0, 00:20:11.605 "tls_version": 0, 00:20:11.605 "enable_ktls": false 00:20:11.605 } 00:20:11.605 }, 00:20:11.605 { 00:20:11.605 "method": "sock_impl_set_options", 00:20:11.605 "params": { 00:20:11.605 "impl_name": "posix", 00:20:11.605 "recv_buf_size": 2097152, 00:20:11.605 "send_buf_size": 2097152, 00:20:11.605 "enable_recv_pipe": true, 00:20:11.605 "enable_quickack": false, 00:20:11.605 "enable_placement_id": 0, 00:20:11.605 "enable_zerocopy_send_server": true, 00:20:11.605 "enable_zerocopy_send_client": false, 00:20:11.605 "zerocopy_threshold": 0, 00:20:11.605 "tls_version": 0, 00:20:11.605 "enable_ktls": false 00:20:11.605 } 00:20:11.605 
} 00:20:11.605 ] 00:20:11.605 }, 00:20:11.605 { 00:20:11.605 "subsystem": "vmd", 00:20:11.605 "config": [] 00:20:11.605 }, 00:20:11.605 { 00:20:11.605 "subsystem": "accel", 00:20:11.605 "config": [ 00:20:11.605 { 00:20:11.605 "method": "accel_set_options", 00:20:11.605 "params": { 00:20:11.605 "small_cache_size": 128, 00:20:11.605 "large_cache_size": 16, 00:20:11.605 "task_count": 2048, 00:20:11.605 "sequence_count": 2048, 00:20:11.605 "buf_count": 2048 00:20:11.605 } 00:20:11.605 } 00:20:11.605 ] 00:20:11.605 }, 00:20:11.605 { 00:20:11.605 "subsystem": "bdev", 00:20:11.605 "config": [ 00:20:11.605 { 00:20:11.605 "method": "bdev_set_options", 00:20:11.605 "params": { 00:20:11.605 "bdev_io_pool_size": 65535, 00:20:11.605 "bdev_io_cache_size": 256, 00:20:11.605 "bdev_auto_examine": true, 00:20:11.605 "iobuf_small_cache_size": 128, 00:20:11.605 "iobuf_large_cache_size": 16 00:20:11.605 } 00:20:11.605 }, 00:20:11.605 { 00:20:11.605 "method": "bdev_raid_set_options", 00:20:11.605 "params": { 00:20:11.605 "process_window_size_kb": 1024, 00:20:11.605 "process_max_bandwidth_mb_sec": 0 00:20:11.605 } 00:20:11.605 }, 00:20:11.605 { 00:20:11.605 "method": "bdev_iscsi_set_options", 00:20:11.605 "params": { 00:20:11.605 "timeout_sec": 30 00:20:11.605 } 00:20:11.605 }, 00:20:11.605 { 00:20:11.605 "method": "bdev_nvme_set_options", 00:20:11.605 "params": { 00:20:11.605 "action_on_timeout": "none", 00:20:11.605 "timeout_us": 0, 00:20:11.605 "timeout_admin_us": 0, 00:20:11.605 "keep_alive_timeout_ms": 10000, 00:20:11.605 "arbitration_burst": 0, 00:20:11.605 "low_priority_weight": 0, 00:20:11.605 "medium_priority_weight": 0, 00:20:11.605 "high_priority_weight": 0, 00:20:11.605 "nvme_adminq_poll_period_us": 10000, 00:20:11.605 "nvme_ioq_poll_period_us": 0, 00:20:11.605 "io_queue_requests": 512, 00:20:11.605 "delay_cmd_submit": true, 00:20:11.605 "transport_retry_count": 4, 00:20:11.605 "bdev_retry_count": 3, 00:20:11.605 "transport_ack_timeout": 0, 00:20:11.605 "ctrlr_loss_timeout_sec": 0, 00:20:11.605 "reconnect_delay_sec": 0, 00:20:11.605 "fast_io_fail_timeout_sec": 0, 00:20:11.605 "disable_auto_failback": false, 00:20:11.605 "generate_uuids": false, 00:20:11.605 "transport_tos": 0, 00:20:11.605 "nvme_error_stat": false, 00:20:11.605 "rdma_srq_size": 0, 00:20:11.605 "io_path_stat": false, 00:20:11.605 "allow_accel_sequence": false, 00:20:11.605 "rdma_max_cq_size": 0, 00:20:11.605 "rdma_cm_event_timeout_ms": 0, 00:20:11.605 "dhchap_digests": [ 00:20:11.605 "sha256", 00:20:11.605 "sha384", 00:20:11.605 "sha512" 00:20:11.605 ], 00:20:11.605 "dhchap_dhgroups": [ 00:20:11.605 "null", 00:20:11.605 "ffdhe2048", 00:20:11.605 "ffdhe3072", 00:20:11.605 "ffdhe4096", 00:20:11.605 "ffdhe6144", 00:20:11.606 "ffdhe8192" 00:20:11.606 ] 00:20:11.606 } 00:20:11.606 }, 00:20:11.606 { 00:20:11.606 "method": "bdev_nvme_attach_controller", 00:20:11.606 "params": { 00:20:11.606 "name": "TLSTEST", 00:20:11.606 "trtype": "TCP", 00:20:11.606 "adrfam": "IPv4", 00:20:11.606 "traddr": "10.0.0.2", 00:20:11.606 "trsvcid": "4420", 00:20:11.606 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.606 "prchk_reftag": false, 00:20:11.606 "prchk_guard": false, 00:20:11.606 "ctrlr_loss_timeout_sec": 0, 00:20:11.606 "reconnect_delay_sec": 0, 00:20:11.606 "fast_io_fail_timeout_sec": 0, 00:20:11.606 "psk": "key0", 00:20:11.606 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.606 "hdgst": false, 00:20:11.606 "ddgst": false, 00:20:11.606 "multipath": "multipath" 00:20:11.606 } 00:20:11.606 }, 00:20:11.606 { 00:20:11.606 "method": 
"bdev_nvme_set_hotplug", 00:20:11.606 "params": { 00:20:11.606 "period_us": 100000, 00:20:11.606 "enable": false 00:20:11.606 } 00:20:11.606 }, 00:20:11.606 { 00:20:11.606 "method": "bdev_wait_for_examine" 00:20:11.606 } 00:20:11.606 ] 00:20:11.606 }, 00:20:11.606 { 00:20:11.606 "subsystem": "nbd", 00:20:11.606 "config": [] 00:20:11.606 } 00:20:11.606 ] 00:20:11.606 }' 00:20:11.606 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 3014445 00:20:11.606 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3014445 ']' 00:20:11.606 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3014445 00:20:11.606 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:11.606 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:11.606 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3014445 00:20:11.866 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:11.866 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:11.866 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3014445' 00:20:11.866 killing process with pid 3014445 00:20:11.866 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3014445 00:20:11.866 Received shutdown signal, test time was about 10.000000 seconds 00:20:11.866 00:20:11.866 Latency(us) 00:20:11.866 [2024-11-05T03:31:25.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:11.866 [2024-11-05T03:31:25.506Z] =================================================================================================================== 00:20:11.866 [2024-11-05T03:31:25.506Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:11.866 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3014445 00:20:11.866 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 3013932 00:20:11.866 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3013932 ']' 00:20:11.866 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3013932 00:20:11.866 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:11.866 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:11.866 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3013932 00:20:11.866 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:11.867 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:11.867 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3013932' 00:20:11.867 killing process with pid 3013932 00:20:11.867 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3013932 00:20:11.867 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3013932 00:20:12.128 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:12.128 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:12.128 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:12.128 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.128 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:20:12.128 "subsystems": [ 00:20:12.128 { 00:20:12.128 "subsystem": "keyring", 00:20:12.128 "config": [ 00:20:12.128 { 00:20:12.128 "method": "keyring_file_add_key", 00:20:12.128 "params": { 00:20:12.128 "name": "key0", 00:20:12.128 "path": "/tmp/tmp.qzI2yU5L6n" 00:20:12.128 } 00:20:12.128 } 00:20:12.128 ] 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "subsystem": "iobuf", 00:20:12.128 "config": [ 00:20:12.128 { 00:20:12.128 "method": "iobuf_set_options", 00:20:12.128 "params": { 00:20:12.128 "small_pool_count": 8192, 00:20:12.128 "large_pool_count": 1024, 00:20:12.128 "small_bufsize": 8192, 00:20:12.128 "large_bufsize": 135168, 00:20:12.128 "enable_numa": false 00:20:12.128 } 00:20:12.128 } 00:20:12.128 ] 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "subsystem": "sock", 00:20:12.128 "config": [ 00:20:12.128 { 00:20:12.128 "method": "sock_set_default_impl", 00:20:12.128 "params": { 00:20:12.128 "impl_name": "posix" 00:20:12.128 } 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "method": "sock_impl_set_options", 00:20:12.128 "params": { 00:20:12.128 "impl_name": "ssl", 00:20:12.128 "recv_buf_size": 4096, 00:20:12.128 "send_buf_size": 4096, 00:20:12.128 "enable_recv_pipe": true, 00:20:12.128 "enable_quickack": false, 00:20:12.128 "enable_placement_id": 0, 00:20:12.128 "enable_zerocopy_send_server": true, 00:20:12.128 "enable_zerocopy_send_client": false, 00:20:12.128 "zerocopy_threshold": 0, 00:20:12.128 "tls_version": 0, 00:20:12.128 "enable_ktls": false 00:20:12.128 } 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "method": "sock_impl_set_options", 00:20:12.128 "params": { 00:20:12.128 "impl_name": "posix", 00:20:12.128 "recv_buf_size": 2097152, 00:20:12.128 "send_buf_size": 2097152, 00:20:12.128 "enable_recv_pipe": true, 00:20:12.128 "enable_quickack": false, 00:20:12.128 "enable_placement_id": 0, 00:20:12.128 "enable_zerocopy_send_server": true, 00:20:12.128 "enable_zerocopy_send_client": false, 00:20:12.128 "zerocopy_threshold": 0, 00:20:12.128 "tls_version": 0, 00:20:12.128 "enable_ktls": false 00:20:12.128 } 00:20:12.128 } 00:20:12.128 ] 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "subsystem": "vmd", 00:20:12.128 "config": [] 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "subsystem": "accel", 00:20:12.128 "config": [ 00:20:12.128 { 00:20:12.128 "method": "accel_set_options", 00:20:12.128 "params": { 00:20:12.128 "small_cache_size": 128, 00:20:12.128 "large_cache_size": 16, 00:20:12.128 "task_count": 2048, 00:20:12.128 "sequence_count": 2048, 00:20:12.128 "buf_count": 2048 00:20:12.128 } 00:20:12.128 } 00:20:12.128 ] 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "subsystem": "bdev", 00:20:12.128 "config": [ 00:20:12.128 { 00:20:12.128 "method": "bdev_set_options", 00:20:12.128 "params": { 00:20:12.128 "bdev_io_pool_size": 65535, 00:20:12.128 "bdev_io_cache_size": 256, 00:20:12.128 "bdev_auto_examine": true, 00:20:12.128 "iobuf_small_cache_size": 128, 00:20:12.128 "iobuf_large_cache_size": 16 00:20:12.128 } 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "method": "bdev_raid_set_options", 00:20:12.128 "params": { 00:20:12.128 
"process_window_size_kb": 1024, 00:20:12.128 "process_max_bandwidth_mb_sec": 0 00:20:12.128 } 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "method": "bdev_iscsi_set_options", 00:20:12.128 "params": { 00:20:12.128 "timeout_sec": 30 00:20:12.128 } 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "method": "bdev_nvme_set_options", 00:20:12.128 "params": { 00:20:12.128 "action_on_timeout": "none", 00:20:12.128 "timeout_us": 0, 00:20:12.128 "timeout_admin_us": 0, 00:20:12.128 "keep_alive_timeout_ms": 10000, 00:20:12.128 "arbitration_burst": 0, 00:20:12.128 "low_priority_weight": 0, 00:20:12.128 "medium_priority_weight": 0, 00:20:12.128 "high_priority_weight": 0, 00:20:12.128 "nvme_adminq_poll_period_us": 10000, 00:20:12.128 "nvme_ioq_poll_period_us": 0, 00:20:12.128 "io_queue_requests": 0, 00:20:12.128 "delay_cmd_submit": true, 00:20:12.128 "transport_retry_count": 4, 00:20:12.128 "bdev_retry_count": 3, 00:20:12.128 "transport_ack_timeout": 0, 00:20:12.128 "ctrlr_loss_timeout_sec": 0, 00:20:12.128 "reconnect_delay_sec": 0, 00:20:12.128 "fast_io_fail_timeout_sec": 0, 00:20:12.128 "disable_auto_failback": false, 00:20:12.128 "generate_uuids": false, 00:20:12.128 "transport_tos": 0, 00:20:12.128 "nvme_error_stat": false, 00:20:12.128 "rdma_srq_size": 0, 00:20:12.128 "io_path_stat": false, 00:20:12.128 "allow_accel_sequence": false, 00:20:12.128 "rdma_max_cq_size": 0, 00:20:12.128 "rdma_cm_event_timeout_ms": 0, 00:20:12.128 "dhchap_digests": [ 00:20:12.128 "sha256", 00:20:12.128 "sha384", 00:20:12.128 "sha512" 00:20:12.128 ], 00:20:12.128 "dhchap_dhgroups": [ 00:20:12.128 "null", 00:20:12.128 "ffdhe2048", 00:20:12.128 "ffdhe3072", 00:20:12.128 "ffdhe4096", 00:20:12.128 "ffdhe6144", 00:20:12.128 "ffdhe8192" 00:20:12.128 ] 00:20:12.128 } 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "method": "bdev_nvme_set_hotplug", 00:20:12.128 "params": { 00:20:12.128 "period_us": 100000, 00:20:12.128 "enable": false 00:20:12.128 } 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "method": "bdev_malloc_create", 00:20:12.128 "params": { 00:20:12.128 "name": "malloc0", 00:20:12.128 "num_blocks": 8192, 00:20:12.128 "block_size": 4096, 00:20:12.128 "physical_block_size": 4096, 00:20:12.128 "uuid": "c574278e-694d-444f-9072-14a908436de2", 00:20:12.128 "optimal_io_boundary": 0, 00:20:12.128 "md_size": 0, 00:20:12.128 "dif_type": 0, 00:20:12.128 "dif_is_head_of_md": false, 00:20:12.128 "dif_pi_format": 0 00:20:12.128 } 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "method": "bdev_wait_for_examine" 00:20:12.128 } 00:20:12.128 ] 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "subsystem": "nbd", 00:20:12.128 "config": [] 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "subsystem": "scheduler", 00:20:12.128 "config": [ 00:20:12.128 { 00:20:12.128 "method": "framework_set_scheduler", 00:20:12.128 "params": { 00:20:12.128 "name": "static" 00:20:12.128 } 00:20:12.128 } 00:20:12.128 ] 00:20:12.128 }, 00:20:12.128 { 00:20:12.128 "subsystem": "nvmf", 00:20:12.128 "config": [ 00:20:12.128 { 00:20:12.128 "method": "nvmf_set_config", 00:20:12.128 "params": { 00:20:12.128 "discovery_filter": "match_any", 00:20:12.128 "admin_cmd_passthru": { 00:20:12.128 "identify_ctrlr": false 00:20:12.128 }, 00:20:12.128 "dhchap_digests": [ 00:20:12.128 "sha256", 00:20:12.128 "sha384", 00:20:12.128 "sha512" 00:20:12.128 ], 00:20:12.128 "dhchap_dhgroups": [ 00:20:12.128 "null", 00:20:12.128 "ffdhe2048", 00:20:12.128 "ffdhe3072", 00:20:12.128 "ffdhe4096", 00:20:12.128 "ffdhe6144", 00:20:12.128 "ffdhe8192" 00:20:12.128 ] 00:20:12.128 } 00:20:12.128 }, 00:20:12.128 { 
00:20:12.128 "method": "nvmf_set_max_subsystems", 00:20:12.129 "params": { 00:20:12.129 "max_subsystems": 1024 00:20:12.129 } 00:20:12.129 }, 00:20:12.129 { 00:20:12.129 "method": "nvmf_set_crdt", 00:20:12.129 "params": { 00:20:12.129 "crdt1": 0, 00:20:12.129 "crdt2": 0, 00:20:12.129 "crdt3": 0 00:20:12.129 } 00:20:12.129 }, 00:20:12.129 { 00:20:12.129 "method": "nvmf_create_transport", 00:20:12.129 "params": { 00:20:12.129 "trtype": "TCP", 00:20:12.129 "max_queue_depth": 128, 00:20:12.129 "max_io_qpairs_per_ctrlr": 127, 00:20:12.129 "in_capsule_data_size": 4096, 00:20:12.129 "max_io_size": 131072, 00:20:12.129 "io_unit_size": 131072, 00:20:12.129 "max_aq_depth": 128, 00:20:12.129 "num_shared_buffers": 511, 00:20:12.129 "buf_cache_size": 4294967295, 00:20:12.129 "dif_insert_or_strip": false, 00:20:12.129 "zcopy": false, 00:20:12.129 "c2h_success": false, 00:20:12.129 "sock_priority": 0, 00:20:12.129 "abort_timeout_sec": 1, 00:20:12.129 "ack_timeout": 0, 00:20:12.129 "data_wr_pool_size": 0 00:20:12.129 } 00:20:12.129 }, 00:20:12.129 { 00:20:12.129 "method": "nvmf_create_subsystem", 00:20:12.129 "params": { 00:20:12.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.129 "allow_any_host": false, 00:20:12.129 "serial_number": "SPDK00000000000001", 00:20:12.129 "model_number": "SPDK bdev Controller", 00:20:12.129 "max_namespaces": 10, 00:20:12.129 "min_cntlid": 1, 00:20:12.129 "max_cntlid": 65519, 00:20:12.129 "ana_reporting": false 00:20:12.129 } 00:20:12.129 }, 00:20:12.129 { 00:20:12.129 "method": "nvmf_subsystem_add_host", 00:20:12.129 "params": { 00:20:12.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.129 "host": "nqn.2016-06.io.spdk:host1", 00:20:12.129 "psk": "key0" 00:20:12.129 } 00:20:12.129 }, 00:20:12.129 { 00:20:12.129 "method": "nvmf_subsystem_add_ns", 00:20:12.129 "params": { 00:20:12.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.129 "namespace": { 00:20:12.129 "nsid": 1, 00:20:12.129 "bdev_name": "malloc0", 00:20:12.129 "nguid": "C574278E694D444F907214A908436DE2", 00:20:12.129 "uuid": "c574278e-694d-444f-9072-14a908436de2", 00:20:12.129 "no_auto_visible": false 00:20:12.129 } 00:20:12.129 } 00:20:12.129 }, 00:20:12.129 { 00:20:12.129 "method": "nvmf_subsystem_add_listener", 00:20:12.129 "params": { 00:20:12.129 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.129 "listen_address": { 00:20:12.129 "trtype": "TCP", 00:20:12.129 "adrfam": "IPv4", 00:20:12.129 "traddr": "10.0.0.2", 00:20:12.129 "trsvcid": "4420" 00:20:12.129 }, 00:20:12.129 "secure_channel": true 00:20:12.129 } 00:20:12.129 } 00:20:12.129 ] 00:20:12.129 } 00:20:12.129 ] 00:20:12.129 }' 00:20:12.129 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3014682 00:20:12.129 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3014682 00:20:12.129 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:12.129 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3014682 ']' 00:20:12.129 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:12.129 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:12.129 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:20:12.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:12.129 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:12.129 04:31:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.129 [2024-11-05 04:31:25.610079] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:20:12.129 [2024-11-05 04:31:25.610138] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:12.129 [2024-11-05 04:31:25.700917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.129 [2024-11-05 04:31:25.729799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:12.129 [2024-11-05 04:31:25.729828] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:12.129 [2024-11-05 04:31:25.729833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:12.129 [2024-11-05 04:31:25.729838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:12.129 [2024-11-05 04:31:25.729842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:12.129 [2024-11-05 04:31:25.730311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:12.389 [2024-11-05 04:31:25.922629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:12.389 [2024-11-05 04:31:25.954659] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:12.389 [2024-11-05 04:31:25.954895] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:12.960 04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:12.960 04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:12.960 04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:12.960 04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:12.960 04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.960 04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:12.960 04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=3014866 00:20:12.960 04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 3014866 /var/tmp/bdevperf.sock 00:20:12.960 04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3014866 ']' 00:20:12.960 04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:12.960 04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:12.960 04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:12.960 
04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:12.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:12.960 04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:12.960 04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:12.960 04:31:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:20:12.961 "subsystems": [ 00:20:12.961 { 00:20:12.961 "subsystem": "keyring", 00:20:12.961 "config": [ 00:20:12.961 { 00:20:12.961 "method": "keyring_file_add_key", 00:20:12.961 "params": { 00:20:12.961 "name": "key0", 00:20:12.961 "path": "/tmp/tmp.qzI2yU5L6n" 00:20:12.961 } 00:20:12.961 } 00:20:12.961 ] 00:20:12.961 }, 00:20:12.961 { 00:20:12.961 "subsystem": "iobuf", 00:20:12.961 "config": [ 00:20:12.961 { 00:20:12.961 "method": "iobuf_set_options", 00:20:12.961 "params": { 00:20:12.961 "small_pool_count": 8192, 00:20:12.961 "large_pool_count": 1024, 00:20:12.961 "small_bufsize": 8192, 00:20:12.961 "large_bufsize": 135168, 00:20:12.961 "enable_numa": false 00:20:12.961 } 00:20:12.961 } 00:20:12.961 ] 00:20:12.961 }, 00:20:12.961 { 00:20:12.961 "subsystem": "sock", 00:20:12.961 "config": [ 00:20:12.961 { 00:20:12.961 "method": "sock_set_default_impl", 00:20:12.961 "params": { 00:20:12.961 "impl_name": "posix" 00:20:12.961 } 00:20:12.961 }, 00:20:12.961 { 00:20:12.961 "method": "sock_impl_set_options", 00:20:12.961 "params": { 00:20:12.961 "impl_name": "ssl", 00:20:12.961 "recv_buf_size": 4096, 00:20:12.961 "send_buf_size": 4096, 00:20:12.961 "enable_recv_pipe": true, 00:20:12.961 "enable_quickack": false, 00:20:12.961 "enable_placement_id": 0, 00:20:12.961 "enable_zerocopy_send_server": true, 00:20:12.961 "enable_zerocopy_send_client": false, 00:20:12.961 "zerocopy_threshold": 0, 00:20:12.961 "tls_version": 0, 00:20:12.961 "enable_ktls": false 00:20:12.961 } 00:20:12.961 }, 00:20:12.961 { 00:20:12.961 "method": "sock_impl_set_options", 00:20:12.961 "params": { 00:20:12.961 "impl_name": "posix", 00:20:12.961 "recv_buf_size": 2097152, 00:20:12.961 "send_buf_size": 2097152, 00:20:12.961 "enable_recv_pipe": true, 00:20:12.961 "enable_quickack": false, 00:20:12.961 "enable_placement_id": 0, 00:20:12.961 "enable_zerocopy_send_server": true, 00:20:12.961 "enable_zerocopy_send_client": false, 00:20:12.961 "zerocopy_threshold": 0, 00:20:12.961 "tls_version": 0, 00:20:12.961 "enable_ktls": false 00:20:12.961 } 00:20:12.961 } 00:20:12.961 ] 00:20:12.961 }, 00:20:12.961 { 00:20:12.961 "subsystem": "vmd", 00:20:12.961 "config": [] 00:20:12.961 }, 00:20:12.961 { 00:20:12.961 "subsystem": "accel", 00:20:12.961 "config": [ 00:20:12.961 { 00:20:12.961 "method": "accel_set_options", 00:20:12.961 "params": { 00:20:12.961 "small_cache_size": 128, 00:20:12.961 "large_cache_size": 16, 00:20:12.961 "task_count": 2048, 00:20:12.961 "sequence_count": 2048, 00:20:12.961 "buf_count": 2048 00:20:12.961 } 00:20:12.961 } 00:20:12.961 ] 00:20:12.961 }, 00:20:12.961 { 00:20:12.961 "subsystem": "bdev", 00:20:12.961 "config": [ 00:20:12.961 { 00:20:12.961 "method": "bdev_set_options", 00:20:12.961 "params": { 00:20:12.961 "bdev_io_pool_size": 65535, 00:20:12.961 "bdev_io_cache_size": 256, 00:20:12.961 "bdev_auto_examine": true, 00:20:12.961 "iobuf_small_cache_size": 128, 00:20:12.961 "iobuf_large_cache_size": 16 00:20:12.961 } 00:20:12.961 
}, 00:20:12.961 { 00:20:12.961 "method": "bdev_raid_set_options", 00:20:12.961 "params": { 00:20:12.961 "process_window_size_kb": 1024, 00:20:12.961 "process_max_bandwidth_mb_sec": 0 00:20:12.961 } 00:20:12.961 }, 00:20:12.961 { 00:20:12.961 "method": "bdev_iscsi_set_options", 00:20:12.961 "params": { 00:20:12.961 "timeout_sec": 30 00:20:12.961 } 00:20:12.961 }, 00:20:12.961 { 00:20:12.961 "method": "bdev_nvme_set_options", 00:20:12.961 "params": { 00:20:12.961 "action_on_timeout": "none", 00:20:12.961 "timeout_us": 0, 00:20:12.961 "timeout_admin_us": 0, 00:20:12.961 "keep_alive_timeout_ms": 10000, 00:20:12.961 "arbitration_burst": 0, 00:20:12.961 "low_priority_weight": 0, 00:20:12.961 "medium_priority_weight": 0, 00:20:12.961 "high_priority_weight": 0, 00:20:12.961 "nvme_adminq_poll_period_us": 10000, 00:20:12.961 "nvme_ioq_poll_period_us": 0, 00:20:12.961 "io_queue_requests": 512, 00:20:12.961 "delay_cmd_submit": true, 00:20:12.961 "transport_retry_count": 4, 00:20:12.961 "bdev_retry_count": 3, 00:20:12.961 "transport_ack_timeout": 0, 00:20:12.961 "ctrlr_loss_timeout_sec": 0, 00:20:12.961 "reconnect_delay_sec": 0, 00:20:12.961 "fast_io_fail_timeout_sec": 0, 00:20:12.961 "disable_auto_failback": false, 00:20:12.961 "generate_uuids": false, 00:20:12.961 "transport_tos": 0, 00:20:12.961 "nvme_error_stat": false, 00:20:12.961 "rdma_srq_size": 0, 00:20:12.961 "io_path_stat": false, 00:20:12.961 "allow_accel_sequence": false, 00:20:12.961 "rdma_max_cq_size": 0, 00:20:12.961 "rdma_cm_event_timeout_ms": 0, 00:20:12.961 "dhchap_digests": [ 00:20:12.961 "sha256", 00:20:12.961 "sha384", 00:20:12.961 "sha512" 00:20:12.961 ], 00:20:12.961 "dhchap_dhgroups": [ 00:20:12.961 "null", 00:20:12.961 "ffdhe2048", 00:20:12.961 "ffdhe3072", 00:20:12.961 "ffdhe4096", 00:20:12.961 "ffdhe6144", 00:20:12.961 "ffdhe8192" 00:20:12.961 ] 00:20:12.961 } 00:20:12.961 }, 00:20:12.961 { 00:20:12.961 "method": "bdev_nvme_attach_controller", 00:20:12.961 "params": { 00:20:12.961 "name": "TLSTEST", 00:20:12.961 "trtype": "TCP", 00:20:12.961 "adrfam": "IPv4", 00:20:12.961 "traddr": "10.0.0.2", 00:20:12.961 "trsvcid": "4420", 00:20:12.961 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:12.961 "prchk_reftag": false, 00:20:12.961 "prchk_guard": false, 00:20:12.961 "ctrlr_loss_timeout_sec": 0, 00:20:12.961 "reconnect_delay_sec": 0, 00:20:12.961 "fast_io_fail_timeout_sec": 0, 00:20:12.961 "psk": "key0", 00:20:12.961 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:12.961 "hdgst": false, 00:20:12.961 "ddgst": false, 00:20:12.961 "multipath": "multipath" 00:20:12.961 } 00:20:12.961 }, 00:20:12.961 { 00:20:12.961 "method": "bdev_nvme_set_hotplug", 00:20:12.961 "params": { 00:20:12.961 "period_us": 100000, 00:20:12.961 "enable": false 00:20:12.961 } 00:20:12.961 }, 00:20:12.961 { 00:20:12.961 "method": "bdev_wait_for_examine" 00:20:12.961 } 00:20:12.961 ] 00:20:12.961 }, 00:20:12.961 { 00:20:12.961 "subsystem": "nbd", 00:20:12.961 "config": [] 00:20:12.961 } 00:20:12.961 ] 00:20:12.961 }' 00:20:12.961 [2024-11-05 04:31:26.492469] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
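The JSON dump that just closed is the bdevperf half of the test: target/tls.sh@199 captured the running app's configuration with save_config into $bdevperfconf, and @206 replays it into a fresh bdevperf through a process-substitution descriptor (it shows up in the trace as -c /dev/fd/63; the target config goes to nvmf_tgt as /dev/fd/62 the same way at @205), so no temporary config file is ever written. A minimal sketch of that pattern, assuming only that $SPDK points at the checkout used in this job:

    # Capture a running SPDK app's JSON config over its RPC socket,
    # then replay it into a new instance via process substitution.
    rpc=$SPDK/scripts/rpc.py

    bdevperfconf=$($rpc -s /var/tmp/bdevperf.sock save_config)

    # <(...) appears to the child as /dev/fd/63, matching the trace.
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")
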
00:20:12.961 [2024-11-05 04:31:26.492526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3014866 ] 00:20:12.961 [2024-11-05 04:31:26.551079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:12.961 [2024-11-05 04:31:26.580215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.222 [2024-11-05 04:31:26.714029] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:13.792 04:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:13.792 04:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:13.792 04:31:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:13.792 Running I/O for 10 seconds... 00:20:15.745 5759.00 IOPS, 22.50 MiB/s [2024-11-05T03:31:30.765Z] 6225.50 IOPS, 24.32 MiB/s [2024-11-05T03:31:31.705Z] 6242.33 IOPS, 24.38 MiB/s [2024-11-05T03:31:32.648Z] 6352.75 IOPS, 24.82 MiB/s [2024-11-05T03:31:33.588Z] 6333.60 IOPS, 24.74 MiB/s [2024-11-05T03:31:34.582Z] 6359.33 IOPS, 24.84 MiB/s [2024-11-05T03:31:35.599Z] 6304.43 IOPS, 24.63 MiB/s [2024-11-05T03:31:36.542Z] 6161.00 IOPS, 24.07 MiB/s [2024-11-05T03:31:37.483Z] 5995.89 IOPS, 23.42 MiB/s [2024-11-05T03:31:37.483Z] 5885.60 IOPS, 22.99 MiB/s 00:20:23.843 Latency(us) 00:20:23.843 [2024-11-05T03:31:37.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:23.843 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:23.843 Verification LBA range: start 0x0 length 0x2000 00:20:23.843 TLSTESTn1 : 10.01 5890.01 23.01 0.00 0.00 21701.75 5297.49 61166.93 00:20:23.843 [2024-11-05T03:31:37.483Z] =================================================================================================================== 00:20:23.843 [2024-11-05T03:31:37.483Z] Total : 5890.01 23.01 0.00 0.00 21701.75 5297.49 61166.93 00:20:23.843 { 00:20:23.843 "results": [ 00:20:23.843 { 00:20:23.843 "job": "TLSTESTn1", 00:20:23.843 "core_mask": "0x4", 00:20:23.843 "workload": "verify", 00:20:23.843 "status": "finished", 00:20:23.843 "verify_range": { 00:20:23.843 "start": 0, 00:20:23.843 "length": 8192 00:20:23.843 }, 00:20:23.843 "queue_depth": 128, 00:20:23.843 "io_size": 4096, 00:20:23.843 "runtime": 10.014075, 00:20:23.843 "iops": 5890.009811190749, 00:20:23.843 "mibps": 23.007850824963864, 00:20:23.844 "io_failed": 0, 00:20:23.844 "io_timeout": 0, 00:20:23.844 "avg_latency_us": 21701.75076615296, 00:20:23.844 "min_latency_us": 5297.493333333333, 00:20:23.844 "max_latency_us": 61166.933333333334 00:20:23.844 } 00:20:23.844 ], 00:20:23.844 "core_count": 1 00:20:23.844 } 00:20:23.844 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:23.844 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 3014866 00:20:23.844 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3014866 ']' 00:20:23.844 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3014866 00:20:23.844 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:20:23.844 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:23.844 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3014866 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3014866' 00:20:24.104 killing process with pid 3014866 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3014866 00:20:24.104 Received shutdown signal, test time was about 10.000000 seconds 00:20:24.104 00:20:24.104 Latency(us) 00:20:24.104 [2024-11-05T03:31:37.744Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:24.104 [2024-11-05T03:31:37.744Z] =================================================================================================================== 00:20:24.104 [2024-11-05T03:31:37.744Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3014866 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 3014682 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3014682 ']' 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3014682 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3014682 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3014682' 00:20:24.104 killing process with pid 3014682 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3014682 00:20:24.104 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3014682 00:20:24.366 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:20:24.366 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:24.366 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:24.366 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.366 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3017089 00:20:24.366 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3017089 00:20:24.366 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
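The `ip netns exec` line above is nvmfappstart: the target runs inside the test's network namespace (cvl_0_0_ns_spdk) so its 10.0.0.2 listener stays isolated from the host, and waitforlisten then blocks until the RPC socket answers. A sketch of that launch-and-wait pattern; the retry budget and the rpc_get_methods probe are assumptions standing in for the harness's actual poll loop:

    # Start nvmf_tgt in the test netns, then poll its RPC socket.
    ip netns exec cvl_0_0_ns_spdk \
        $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!

    # waitforlisten equivalent: retry until the UNIX socket answers.
    for ((i = 0; i < 100; i++)); do
        $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods \
            &>/dev/null && break
        sleep 0.1
    done
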
00:20:24.366 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3017089 ']' 00:20:24.366 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.366 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:24.366 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:24.366 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:24.366 04:31:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:24.366 [2024-11-05 04:31:37.838547] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:20:24.366 [2024-11-05 04:31:37.838606] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:24.366 [2024-11-05 04:31:37.915063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:24.366 [2024-11-05 04:31:37.949124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:24.366 [2024-11-05 04:31:37.949161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:24.366 [2024-11-05 04:31:37.949169] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:24.366 [2024-11-05 04:31:37.949175] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:24.366 [2024-11-05 04:31:37.949181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
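The app_setup_trace notices just above appear because the target was started with -e 0xFFFF (all tracepoint groups enabled), and they spell out how to retrieve the data. A sketch of both options the log suggests; the spdk_trace binary path is an assumption:

    # Live snapshot of instance 0's nvmf tracepoints, per the notice:
    $SPDK/build/bin/spdk_trace -s nvmf -i 0
    # Or grab the shared-memory ring for offline analysis:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
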
00:20:24.366 [2024-11-05 04:31:37.949761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:25.309 04:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:25.309 04:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:25.309 04:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:25.309 04:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:25.309 04:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:25.309 04:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:25.309 04:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.qzI2yU5L6n 00:20:25.309 04:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.qzI2yU5L6n 00:20:25.309 04:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:25.309 [2024-11-05 04:31:38.830244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:25.309 04:31:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:25.570 04:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:25.570 [2024-11-05 04:31:39.199166] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.570 [2024-11-05 04:31:39.199397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.831 04:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:25.831 malloc0 00:20:25.831 04:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:26.103 04:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.qzI2yU5L6n 00:20:26.372 04:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:26.372 04:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=3017591 00:20:26.372 04:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:26.372 04:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:26.372 04:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 3017591 /var/tmp/bdevperf.sock 00:20:26.372 04:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 3017591 ']' 00:20:26.372 04:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:26.372 04:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:26.372 04:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:26.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:26.372 04:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:26.372 04:31:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:26.372 [2024-11-05 04:31:39.989176] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:20:26.372 [2024-11-05 04:31:39.989229] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3017591 ] 00:20:26.634 [2024-11-05 04:31:40.073671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.634 [2024-11-05 04:31:40.103104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:27.206 04:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:27.206 04:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:27.206 04:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qzI2yU5L6n 00:20:27.467 04:31:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:27.467 [2024-11-05 04:31:41.098707] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.728 nvme0n1 00:20:27.728 04:31:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:27.728 Running I/O for 1 seconds... 
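At this point the whole TLS path has been assembled over RPC: setup_nvmf_tgt (target/tls.sh@50-59) built the transport, subsystem, TLS listener (-k), malloc namespace, PSK key and host entry, and @229-@230 handed bdevperf the same key and attached with --psk. Condensed from the commands traced above; only $SPDK and the rpc alias are shorthand, and the key path is the temp file from this run:

    rpc=$SPDK/scripts/rpc.py

    # Target side (default socket /var/tmp/spdk.sock):
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k          # -k = TLS listener
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc keyring_file_add_key key0 /tmp/tmp.qzI2yU5L6n
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0

    # Initiator side, against the bdevperf socket:
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qzI2yU5L6n
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
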
00:20:28.670 4760.00 IOPS, 18.59 MiB/s 00:20:28.670 Latency(us) 00:20:28.670 [2024-11-05T03:31:42.310Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.670 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:28.670 Verification LBA range: start 0x0 length 0x2000 00:20:28.670 nvme0n1 : 1.01 4815.86 18.81 0.00 0.00 26409.82 4587.52 79517.01 00:20:28.670 [2024-11-05T03:31:42.310Z] =================================================================================================================== 00:20:28.670 [2024-11-05T03:31:42.310Z] Total : 4815.86 18.81 0.00 0.00 26409.82 4587.52 79517.01 00:20:28.670 { 00:20:28.670 "results": [ 00:20:28.670 { 00:20:28.670 "job": "nvme0n1", 00:20:28.670 "core_mask": "0x2", 00:20:28.670 "workload": "verify", 00:20:28.670 "status": "finished", 00:20:28.670 "verify_range": { 00:20:28.670 "start": 0, 00:20:28.670 "length": 8192 00:20:28.670 }, 00:20:28.670 "queue_depth": 128, 00:20:28.670 "io_size": 4096, 00:20:28.670 "runtime": 1.014979, 00:20:28.670 "iops": 4815.863185346692, 00:20:28.670 "mibps": 18.811965567760517, 00:20:28.670 "io_failed": 0, 00:20:28.670 "io_timeout": 0, 00:20:28.670 "avg_latency_us": 26409.820316421166, 00:20:28.670 "min_latency_us": 4587.52, 00:20:28.670 "max_latency_us": 79517.01333333334 00:20:28.670 } 00:20:28.670 ], 00:20:28.670 "core_count": 1 00:20:28.670 } 00:20:28.670 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 3017591 00:20:28.670 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3017591 ']' 00:20:28.670 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3017591 00:20:28.670 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3017591 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3017591' 00:20:28.932 killing process with pid 3017591 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3017591 00:20:28.932 Received shutdown signal, test time was about 1.000000 seconds 00:20:28.932 00:20:28.932 Latency(us) 00:20:28.932 [2024-11-05T03:31:42.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:28.932 [2024-11-05T03:31:42.572Z] =================================================================================================================== 00:20:28.932 [2024-11-05T03:31:42.572Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3017591 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 3017089 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3017089 ']' 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3017089 00:20:28.932 04:31:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3017089 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3017089' 00:20:28.932 killing process with pid 3017089 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3017089 00:20:28.932 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3017089 00:20:29.193 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:20:29.193 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:29.193 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:29.193 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.193 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:29.193 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3017994 00:20:29.193 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3017994 00:20:29.193 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3017994 ']' 00:20:29.193 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.193 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:29.193 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:29.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.193 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:29.193 04:31:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:29.193 [2024-11-05 04:31:42.721693] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:20:29.193 [2024-11-05 04:31:42.721762] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.193 [2024-11-05 04:31:42.797155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:29.193 [2024-11-05 04:31:42.831652] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.193 [2024-11-05 04:31:42.831685] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
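killprocess, traced above for both the bdevperf and target PIDs, runs a fixed safety check before sending the signal: confirm the PID is alive with kill -0, read its command name, and refuse to kill anything running as sudo. A simplified sketch of that helper (the Linux/FreeBSD branching and error handling in autotest_common.sh are trimmed):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1    # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1
        [ "$name" = sudo ] && return 1            # never kill sudo itself
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap and surface rc
    }
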
00:20:29.193 [2024-11-05 04:31:42.831693] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.193 [2024-11-05 04:31:42.831701] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.193 [2024-11-05 04:31:42.831707] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:29.454 [2024-11-05 04:31:42.832285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:30.025 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:30.025 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:30.025 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:30.025 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:30.025 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.025 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:30.025 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:20:30.025 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.025 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.025 [2024-11-05 04:31:43.568352] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:30.025 malloc0 00:20:30.025 [2024-11-05 04:31:43.595034] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:30.025 [2024-11-05 04:31:43.595263] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:30.025 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.025 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=3018305 00:20:30.025 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 3018305 /var/tmp/bdevperf.sock 00:20:30.025 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:20:30.025 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3018305 ']' 00:20:30.025 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.026 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:30.026 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:30.026 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:30.026 04:31:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:30.286 [2024-11-05 04:31:43.673497] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
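The bdevperf instance being started here uses the -z handshake: it comes up idle on its RPC socket, the test injects the key and attaches the TLS controller over RPC, and only then does bdevperf.py perform_tests fire the verify workload whose one-second result follows. Sketched from the flags in the trace:

    # -z: start idle and wait for configuration over -r's socket.
    $SPDK/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 &

    # ... keyring_file_add_key / bdev_nvme_attach_controller via rpc.py ...

    # Trigger the configured workload and collect the JSON results:
    $SPDK/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests

For scale, the MiB/s column in these result tables is just IOPS times the 4 KiB I/O size: 4815.86 IOPS x 4096 B / 2^20 is about 18.81 MiB/s in the run above, and the same arithmetic gives the 23.17 MiB/s reported below.
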
00:20:30.286 [2024-11-05 04:31:43.673545] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3018305 ] 00:20:30.286 [2024-11-05 04:31:43.757055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.286 [2024-11-05 04:31:43.786697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:30.858 04:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:30.858 04:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:30.858 04:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.qzI2yU5L6n 00:20:31.119 04:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:31.380 [2024-11-05 04:31:44.766194] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:31.380 nvme0n1 00:20:31.380 04:31:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:31.380 Running I/O for 1 seconds... 00:20:32.583 5897.00 IOPS, 23.04 MiB/s 00:20:32.583 Latency(us) 00:20:32.583 [2024-11-05T03:31:46.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.583 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:32.583 Verification LBA range: start 0x0 length 0x2000 00:20:32.583 nvme0n1 : 1.02 5932.15 23.17 0.00 0.00 21404.83 5789.01 35170.99 00:20:32.583 [2024-11-05T03:31:46.223Z] =================================================================================================================== 00:20:32.583 [2024-11-05T03:31:46.223Z] Total : 5932.15 23.17 0.00 0.00 21404.83 5789.01 35170.99 00:20:32.583 { 00:20:32.583 "results": [ 00:20:32.583 { 00:20:32.584 "job": "nvme0n1", 00:20:32.584 "core_mask": "0x2", 00:20:32.584 "workload": "verify", 00:20:32.584 "status": "finished", 00:20:32.584 "verify_range": { 00:20:32.584 "start": 0, 00:20:32.584 "length": 8192 00:20:32.584 }, 00:20:32.584 "queue_depth": 128, 00:20:32.584 "io_size": 4096, 00:20:32.584 "runtime": 1.015652, 00:20:32.584 "iops": 5932.149988381847, 00:20:32.584 "mibps": 23.17246089211659, 00:20:32.584 "io_failed": 0, 00:20:32.584 "io_timeout": 0, 00:20:32.584 "avg_latency_us": 21404.82807856155, 00:20:32.584 "min_latency_us": 5789.013333333333, 00:20:32.584 "max_latency_us": 35170.986666666664 00:20:32.584 } 00:20:32.584 ], 00:20:32.584 "core_count": 1 00:20:32.584 } 00:20:32.584 04:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:20:32.584 04:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.584 04:31:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:32.584 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.584 04:31:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:20:32.584 "subsystems": [ 00:20:32.584 { 00:20:32.584 "subsystem": "keyring", 00:20:32.584 "config": [ 00:20:32.584 { 00:20:32.584 "method": "keyring_file_add_key", 00:20:32.584 "params": { 00:20:32.584 "name": "key0", 00:20:32.584 "path": "/tmp/tmp.qzI2yU5L6n" 00:20:32.584 } 00:20:32.584 } 00:20:32.584 ] 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "subsystem": "iobuf", 00:20:32.584 "config": [ 00:20:32.584 { 00:20:32.584 "method": "iobuf_set_options", 00:20:32.584 "params": { 00:20:32.584 "small_pool_count": 8192, 00:20:32.584 "large_pool_count": 1024, 00:20:32.584 "small_bufsize": 8192, 00:20:32.584 "large_bufsize": 135168, 00:20:32.584 "enable_numa": false 00:20:32.584 } 00:20:32.584 } 00:20:32.584 ] 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "subsystem": "sock", 00:20:32.584 "config": [ 00:20:32.584 { 00:20:32.584 "method": "sock_set_default_impl", 00:20:32.584 "params": { 00:20:32.584 "impl_name": "posix" 00:20:32.584 } 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "method": "sock_impl_set_options", 00:20:32.584 "params": { 00:20:32.584 "impl_name": "ssl", 00:20:32.584 "recv_buf_size": 4096, 00:20:32.584 "send_buf_size": 4096, 00:20:32.584 "enable_recv_pipe": true, 00:20:32.584 "enable_quickack": false, 00:20:32.584 "enable_placement_id": 0, 00:20:32.584 "enable_zerocopy_send_server": true, 00:20:32.584 "enable_zerocopy_send_client": false, 00:20:32.584 "zerocopy_threshold": 0, 00:20:32.584 "tls_version": 0, 00:20:32.584 "enable_ktls": false 00:20:32.584 } 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "method": "sock_impl_set_options", 00:20:32.584 "params": { 00:20:32.584 "impl_name": "posix", 00:20:32.584 "recv_buf_size": 2097152, 00:20:32.584 "send_buf_size": 2097152, 00:20:32.584 "enable_recv_pipe": true, 00:20:32.584 "enable_quickack": false, 00:20:32.584 "enable_placement_id": 0, 00:20:32.584 "enable_zerocopy_send_server": true, 00:20:32.584 "enable_zerocopy_send_client": false, 00:20:32.584 "zerocopy_threshold": 0, 00:20:32.584 "tls_version": 0, 00:20:32.584 "enable_ktls": false 00:20:32.584 } 00:20:32.584 } 00:20:32.584 ] 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "subsystem": "vmd", 00:20:32.584 "config": [] 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "subsystem": "accel", 00:20:32.584 "config": [ 00:20:32.584 { 00:20:32.584 "method": "accel_set_options", 00:20:32.584 "params": { 00:20:32.584 "small_cache_size": 128, 00:20:32.584 "large_cache_size": 16, 00:20:32.584 "task_count": 2048, 00:20:32.584 "sequence_count": 2048, 00:20:32.584 "buf_count": 2048 00:20:32.584 } 00:20:32.584 } 00:20:32.584 ] 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "subsystem": "bdev", 00:20:32.584 "config": [ 00:20:32.584 { 00:20:32.584 "method": "bdev_set_options", 00:20:32.584 "params": { 00:20:32.584 "bdev_io_pool_size": 65535, 00:20:32.584 "bdev_io_cache_size": 256, 00:20:32.584 "bdev_auto_examine": true, 00:20:32.584 "iobuf_small_cache_size": 128, 00:20:32.584 "iobuf_large_cache_size": 16 00:20:32.584 } 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "method": "bdev_raid_set_options", 00:20:32.584 "params": { 00:20:32.584 "process_window_size_kb": 1024, 00:20:32.584 "process_max_bandwidth_mb_sec": 0 00:20:32.584 } 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "method": "bdev_iscsi_set_options", 00:20:32.584 "params": { 00:20:32.584 "timeout_sec": 30 00:20:32.584 } 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "method": "bdev_nvme_set_options", 00:20:32.584 "params": { 00:20:32.584 "action_on_timeout": "none", 00:20:32.584 
"timeout_us": 0, 00:20:32.584 "timeout_admin_us": 0, 00:20:32.584 "keep_alive_timeout_ms": 10000, 00:20:32.584 "arbitration_burst": 0, 00:20:32.584 "low_priority_weight": 0, 00:20:32.584 "medium_priority_weight": 0, 00:20:32.584 "high_priority_weight": 0, 00:20:32.584 "nvme_adminq_poll_period_us": 10000, 00:20:32.584 "nvme_ioq_poll_period_us": 0, 00:20:32.584 "io_queue_requests": 0, 00:20:32.584 "delay_cmd_submit": true, 00:20:32.584 "transport_retry_count": 4, 00:20:32.584 "bdev_retry_count": 3, 00:20:32.584 "transport_ack_timeout": 0, 00:20:32.584 "ctrlr_loss_timeout_sec": 0, 00:20:32.584 "reconnect_delay_sec": 0, 00:20:32.584 "fast_io_fail_timeout_sec": 0, 00:20:32.584 "disable_auto_failback": false, 00:20:32.584 "generate_uuids": false, 00:20:32.584 "transport_tos": 0, 00:20:32.584 "nvme_error_stat": false, 00:20:32.584 "rdma_srq_size": 0, 00:20:32.584 "io_path_stat": false, 00:20:32.584 "allow_accel_sequence": false, 00:20:32.584 "rdma_max_cq_size": 0, 00:20:32.584 "rdma_cm_event_timeout_ms": 0, 00:20:32.584 "dhchap_digests": [ 00:20:32.584 "sha256", 00:20:32.584 "sha384", 00:20:32.584 "sha512" 00:20:32.584 ], 00:20:32.584 "dhchap_dhgroups": [ 00:20:32.584 "null", 00:20:32.584 "ffdhe2048", 00:20:32.584 "ffdhe3072", 00:20:32.584 "ffdhe4096", 00:20:32.584 "ffdhe6144", 00:20:32.584 "ffdhe8192" 00:20:32.584 ] 00:20:32.584 } 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "method": "bdev_nvme_set_hotplug", 00:20:32.584 "params": { 00:20:32.584 "period_us": 100000, 00:20:32.584 "enable": false 00:20:32.584 } 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "method": "bdev_malloc_create", 00:20:32.584 "params": { 00:20:32.584 "name": "malloc0", 00:20:32.584 "num_blocks": 8192, 00:20:32.584 "block_size": 4096, 00:20:32.584 "physical_block_size": 4096, 00:20:32.584 "uuid": "1b7f6adf-e720-4f51-a27a-d34f1b69f7fc", 00:20:32.584 "optimal_io_boundary": 0, 00:20:32.584 "md_size": 0, 00:20:32.584 "dif_type": 0, 00:20:32.584 "dif_is_head_of_md": false, 00:20:32.584 "dif_pi_format": 0 00:20:32.584 } 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "method": "bdev_wait_for_examine" 00:20:32.584 } 00:20:32.584 ] 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "subsystem": "nbd", 00:20:32.584 "config": [] 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "subsystem": "scheduler", 00:20:32.584 "config": [ 00:20:32.584 { 00:20:32.584 "method": "framework_set_scheduler", 00:20:32.584 "params": { 00:20:32.584 "name": "static" 00:20:32.584 } 00:20:32.584 } 00:20:32.584 ] 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "subsystem": "nvmf", 00:20:32.584 "config": [ 00:20:32.584 { 00:20:32.584 "method": "nvmf_set_config", 00:20:32.584 "params": { 00:20:32.584 "discovery_filter": "match_any", 00:20:32.584 "admin_cmd_passthru": { 00:20:32.584 "identify_ctrlr": false 00:20:32.584 }, 00:20:32.584 "dhchap_digests": [ 00:20:32.584 "sha256", 00:20:32.584 "sha384", 00:20:32.584 "sha512" 00:20:32.584 ], 00:20:32.584 "dhchap_dhgroups": [ 00:20:32.584 "null", 00:20:32.584 "ffdhe2048", 00:20:32.584 "ffdhe3072", 00:20:32.584 "ffdhe4096", 00:20:32.584 "ffdhe6144", 00:20:32.584 "ffdhe8192" 00:20:32.584 ] 00:20:32.584 } 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "method": "nvmf_set_max_subsystems", 00:20:32.584 "params": { 00:20:32.584 "max_subsystems": 1024 00:20:32.584 } 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "method": "nvmf_set_crdt", 00:20:32.584 "params": { 00:20:32.584 "crdt1": 0, 00:20:32.584 "crdt2": 0, 00:20:32.584 "crdt3": 0 00:20:32.584 } 00:20:32.584 }, 00:20:32.584 { 00:20:32.584 "method": "nvmf_create_transport", 00:20:32.584 "params": 
{ 00:20:32.584 "trtype": "TCP", 00:20:32.584 "max_queue_depth": 128, 00:20:32.584 "max_io_qpairs_per_ctrlr": 127, 00:20:32.584 "in_capsule_data_size": 4096, 00:20:32.584 "max_io_size": 131072, 00:20:32.584 "io_unit_size": 131072, 00:20:32.584 "max_aq_depth": 128, 00:20:32.584 "num_shared_buffers": 511, 00:20:32.584 "buf_cache_size": 4294967295, 00:20:32.584 "dif_insert_or_strip": false, 00:20:32.584 "zcopy": false, 00:20:32.584 "c2h_success": false, 00:20:32.585 "sock_priority": 0, 00:20:32.585 "abort_timeout_sec": 1, 00:20:32.585 "ack_timeout": 0, 00:20:32.585 "data_wr_pool_size": 0 00:20:32.585 } 00:20:32.585 }, 00:20:32.585 { 00:20:32.585 "method": "nvmf_create_subsystem", 00:20:32.585 "params": { 00:20:32.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.585 "allow_any_host": false, 00:20:32.585 "serial_number": "00000000000000000000", 00:20:32.585 "model_number": "SPDK bdev Controller", 00:20:32.585 "max_namespaces": 32, 00:20:32.585 "min_cntlid": 1, 00:20:32.585 "max_cntlid": 65519, 00:20:32.585 "ana_reporting": false 00:20:32.585 } 00:20:32.585 }, 00:20:32.585 { 00:20:32.585 "method": "nvmf_subsystem_add_host", 00:20:32.585 "params": { 00:20:32.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.585 "host": "nqn.2016-06.io.spdk:host1", 00:20:32.585 "psk": "key0" 00:20:32.585 } 00:20:32.585 }, 00:20:32.585 { 00:20:32.585 "method": "nvmf_subsystem_add_ns", 00:20:32.585 "params": { 00:20:32.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.585 "namespace": { 00:20:32.585 "nsid": 1, 00:20:32.585 "bdev_name": "malloc0", 00:20:32.585 "nguid": "1B7F6ADFE7204F51A27AD34F1B69F7FC", 00:20:32.585 "uuid": "1b7f6adf-e720-4f51-a27a-d34f1b69f7fc", 00:20:32.585 "no_auto_visible": false 00:20:32.585 } 00:20:32.585 } 00:20:32.585 }, 00:20:32.585 { 00:20:32.585 "method": "nvmf_subsystem_add_listener", 00:20:32.585 "params": { 00:20:32.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.585 "listen_address": { 00:20:32.585 "trtype": "TCP", 00:20:32.585 "adrfam": "IPv4", 00:20:32.585 "traddr": "10.0.0.2", 00:20:32.585 "trsvcid": "4420" 00:20:32.585 }, 00:20:32.585 "secure_channel": false, 00:20:32.585 "sock_impl": "ssl" 00:20:32.585 } 00:20:32.585 } 00:20:32.585 ] 00:20:32.585 } 00:20:32.585 ] 00:20:32.585 }' 00:20:32.585 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:32.847 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:20:32.847 "subsystems": [ 00:20:32.847 { 00:20:32.847 "subsystem": "keyring", 00:20:32.847 "config": [ 00:20:32.847 { 00:20:32.847 "method": "keyring_file_add_key", 00:20:32.847 "params": { 00:20:32.847 "name": "key0", 00:20:32.847 "path": "/tmp/tmp.qzI2yU5L6n" 00:20:32.847 } 00:20:32.847 } 00:20:32.847 ] 00:20:32.847 }, 00:20:32.847 { 00:20:32.847 "subsystem": "iobuf", 00:20:32.847 "config": [ 00:20:32.847 { 00:20:32.847 "method": "iobuf_set_options", 00:20:32.847 "params": { 00:20:32.847 "small_pool_count": 8192, 00:20:32.847 "large_pool_count": 1024, 00:20:32.847 "small_bufsize": 8192, 00:20:32.847 "large_bufsize": 135168, 00:20:32.847 "enable_numa": false 00:20:32.847 } 00:20:32.847 } 00:20:32.847 ] 00:20:32.847 }, 00:20:32.847 { 00:20:32.847 "subsystem": "sock", 00:20:32.847 "config": [ 00:20:32.847 { 00:20:32.847 "method": "sock_set_default_impl", 00:20:32.847 "params": { 00:20:32.847 "impl_name": "posix" 00:20:32.847 } 00:20:32.847 }, 00:20:32.847 { 00:20:32.847 "method": "sock_impl_set_options", 00:20:32.847 
"params": { 00:20:32.847 "impl_name": "ssl", 00:20:32.847 "recv_buf_size": 4096, 00:20:32.847 "send_buf_size": 4096, 00:20:32.847 "enable_recv_pipe": true, 00:20:32.847 "enable_quickack": false, 00:20:32.847 "enable_placement_id": 0, 00:20:32.847 "enable_zerocopy_send_server": true, 00:20:32.847 "enable_zerocopy_send_client": false, 00:20:32.847 "zerocopy_threshold": 0, 00:20:32.847 "tls_version": 0, 00:20:32.847 "enable_ktls": false 00:20:32.847 } 00:20:32.847 }, 00:20:32.847 { 00:20:32.847 "method": "sock_impl_set_options", 00:20:32.847 "params": { 00:20:32.847 "impl_name": "posix", 00:20:32.847 "recv_buf_size": 2097152, 00:20:32.847 "send_buf_size": 2097152, 00:20:32.847 "enable_recv_pipe": true, 00:20:32.847 "enable_quickack": false, 00:20:32.847 "enable_placement_id": 0, 00:20:32.847 "enable_zerocopy_send_server": true, 00:20:32.847 "enable_zerocopy_send_client": false, 00:20:32.847 "zerocopy_threshold": 0, 00:20:32.847 "tls_version": 0, 00:20:32.847 "enable_ktls": false 00:20:32.847 } 00:20:32.847 } 00:20:32.847 ] 00:20:32.847 }, 00:20:32.847 { 00:20:32.847 "subsystem": "vmd", 00:20:32.847 "config": [] 00:20:32.847 }, 00:20:32.847 { 00:20:32.847 "subsystem": "accel", 00:20:32.847 "config": [ 00:20:32.847 { 00:20:32.847 "method": "accel_set_options", 00:20:32.847 "params": { 00:20:32.847 "small_cache_size": 128, 00:20:32.847 "large_cache_size": 16, 00:20:32.847 "task_count": 2048, 00:20:32.847 "sequence_count": 2048, 00:20:32.847 "buf_count": 2048 00:20:32.847 } 00:20:32.847 } 00:20:32.847 ] 00:20:32.847 }, 00:20:32.847 { 00:20:32.847 "subsystem": "bdev", 00:20:32.847 "config": [ 00:20:32.847 { 00:20:32.847 "method": "bdev_set_options", 00:20:32.847 "params": { 00:20:32.847 "bdev_io_pool_size": 65535, 00:20:32.847 "bdev_io_cache_size": 256, 00:20:32.847 "bdev_auto_examine": true, 00:20:32.847 "iobuf_small_cache_size": 128, 00:20:32.847 "iobuf_large_cache_size": 16 00:20:32.847 } 00:20:32.847 }, 00:20:32.847 { 00:20:32.847 "method": "bdev_raid_set_options", 00:20:32.847 "params": { 00:20:32.847 "process_window_size_kb": 1024, 00:20:32.847 "process_max_bandwidth_mb_sec": 0 00:20:32.847 } 00:20:32.847 }, 00:20:32.847 { 00:20:32.847 "method": "bdev_iscsi_set_options", 00:20:32.847 "params": { 00:20:32.847 "timeout_sec": 30 00:20:32.847 } 00:20:32.847 }, 00:20:32.847 { 00:20:32.847 "method": "bdev_nvme_set_options", 00:20:32.847 "params": { 00:20:32.847 "action_on_timeout": "none", 00:20:32.847 "timeout_us": 0, 00:20:32.847 "timeout_admin_us": 0, 00:20:32.847 "keep_alive_timeout_ms": 10000, 00:20:32.847 "arbitration_burst": 0, 00:20:32.847 "low_priority_weight": 0, 00:20:32.847 "medium_priority_weight": 0, 00:20:32.847 "high_priority_weight": 0, 00:20:32.847 "nvme_adminq_poll_period_us": 10000, 00:20:32.847 "nvme_ioq_poll_period_us": 0, 00:20:32.847 "io_queue_requests": 512, 00:20:32.847 "delay_cmd_submit": true, 00:20:32.847 "transport_retry_count": 4, 00:20:32.847 "bdev_retry_count": 3, 00:20:32.847 "transport_ack_timeout": 0, 00:20:32.847 "ctrlr_loss_timeout_sec": 0, 00:20:32.847 "reconnect_delay_sec": 0, 00:20:32.847 "fast_io_fail_timeout_sec": 0, 00:20:32.847 "disable_auto_failback": false, 00:20:32.847 "generate_uuids": false, 00:20:32.847 "transport_tos": 0, 00:20:32.847 "nvme_error_stat": false, 00:20:32.847 "rdma_srq_size": 0, 00:20:32.847 "io_path_stat": false, 00:20:32.847 "allow_accel_sequence": false, 00:20:32.847 "rdma_max_cq_size": 0, 00:20:32.847 "rdma_cm_event_timeout_ms": 0, 00:20:32.847 "dhchap_digests": [ 00:20:32.847 "sha256", 00:20:32.847 "sha384", 00:20:32.847 
"sha512" 00:20:32.847 ], 00:20:32.847 "dhchap_dhgroups": [ 00:20:32.847 "null", 00:20:32.847 "ffdhe2048", 00:20:32.847 "ffdhe3072", 00:20:32.847 "ffdhe4096", 00:20:32.847 "ffdhe6144", 00:20:32.847 "ffdhe8192" 00:20:32.847 ] 00:20:32.847 } 00:20:32.847 }, 00:20:32.847 { 00:20:32.847 "method": "bdev_nvme_attach_controller", 00:20:32.847 "params": { 00:20:32.847 "name": "nvme0", 00:20:32.847 "trtype": "TCP", 00:20:32.847 "adrfam": "IPv4", 00:20:32.847 "traddr": "10.0.0.2", 00:20:32.847 "trsvcid": "4420", 00:20:32.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:32.847 "prchk_reftag": false, 00:20:32.847 "prchk_guard": false, 00:20:32.847 "ctrlr_loss_timeout_sec": 0, 00:20:32.847 "reconnect_delay_sec": 0, 00:20:32.847 "fast_io_fail_timeout_sec": 0, 00:20:32.847 "psk": "key0", 00:20:32.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:32.847 "hdgst": false, 00:20:32.847 "ddgst": false, 00:20:32.847 "multipath": "multipath" 00:20:32.847 } 00:20:32.847 }, 00:20:32.847 { 00:20:32.847 "method": "bdev_nvme_set_hotplug", 00:20:32.847 "params": { 00:20:32.847 "period_us": 100000, 00:20:32.847 "enable": false 00:20:32.847 } 00:20:32.847 }, 00:20:32.847 { 00:20:32.847 "method": "bdev_enable_histogram", 00:20:32.847 "params": { 00:20:32.847 "name": "nvme0n1", 00:20:32.847 "enable": true 00:20:32.847 } 00:20:32.847 }, 00:20:32.847 { 00:20:32.847 "method": "bdev_wait_for_examine" 00:20:32.847 } 00:20:32.847 ] 00:20:32.847 }, 00:20:32.847 { 00:20:32.847 "subsystem": "nbd", 00:20:32.847 "config": [] 00:20:32.847 } 00:20:32.847 ] 00:20:32.847 }' 00:20:32.847 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 3018305 00:20:32.847 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3018305 ']' 00:20:32.847 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3018305 00:20:32.847 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:32.847 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:32.847 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3018305 00:20:32.847 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:32.847 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:32.847 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3018305' 00:20:32.847 killing process with pid 3018305 00:20:32.847 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3018305 00:20:32.848 Received shutdown signal, test time was about 1.000000 seconds 00:20:32.848 00:20:32.848 Latency(us) 00:20:32.848 [2024-11-05T03:31:46.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:32.848 [2024-11-05T03:31:46.488Z] =================================================================================================================== 00:20:32.848 [2024-11-05T03:31:46.488Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:32.848 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3018305 00:20:33.109 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 3017994 00:20:33.109 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3017994 
']' 00:20:33.109 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3017994 00:20:33.109 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:33.109 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:33.109 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3017994 00:20:33.109 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:33.109 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:33.109 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3017994' 00:20:33.109 killing process with pid 3017994 00:20:33.109 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3017994 00:20:33.109 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3017994 00:20:33.109 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:20:33.109 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:33.109 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:33.109 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:20:33.109 "subsystems": [ 00:20:33.109 { 00:20:33.109 "subsystem": "keyring", 00:20:33.109 "config": [ 00:20:33.109 { 00:20:33.109 "method": "keyring_file_add_key", 00:20:33.109 "params": { 00:20:33.109 "name": "key0", 00:20:33.109 "path": "/tmp/tmp.qzI2yU5L6n" 00:20:33.109 } 00:20:33.109 } 00:20:33.109 ] 00:20:33.109 }, 00:20:33.109 { 00:20:33.109 "subsystem": "iobuf", 00:20:33.109 "config": [ 00:20:33.109 { 00:20:33.109 "method": "iobuf_set_options", 00:20:33.109 "params": { 00:20:33.109 "small_pool_count": 8192, 00:20:33.109 "large_pool_count": 1024, 00:20:33.109 "small_bufsize": 8192, 00:20:33.109 "large_bufsize": 135168, 00:20:33.109 "enable_numa": false 00:20:33.109 } 00:20:33.109 } 00:20:33.109 ] 00:20:33.109 }, 00:20:33.109 { 00:20:33.109 "subsystem": "sock", 00:20:33.109 "config": [ 00:20:33.109 { 00:20:33.109 "method": "sock_set_default_impl", 00:20:33.109 "params": { 00:20:33.109 "impl_name": "posix" 00:20:33.109 } 00:20:33.109 }, 00:20:33.109 { 00:20:33.109 "method": "sock_impl_set_options", 00:20:33.109 "params": { 00:20:33.109 "impl_name": "ssl", 00:20:33.109 "recv_buf_size": 4096, 00:20:33.109 "send_buf_size": 4096, 00:20:33.110 "enable_recv_pipe": true, 00:20:33.110 "enable_quickack": false, 00:20:33.110 "enable_placement_id": 0, 00:20:33.110 "enable_zerocopy_send_server": true, 00:20:33.110 "enable_zerocopy_send_client": false, 00:20:33.110 "zerocopy_threshold": 0, 00:20:33.110 "tls_version": 0, 00:20:33.110 "enable_ktls": false 00:20:33.110 } 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "method": "sock_impl_set_options", 00:20:33.110 "params": { 00:20:33.110 "impl_name": "posix", 00:20:33.110 "recv_buf_size": 2097152, 00:20:33.110 "send_buf_size": 2097152, 00:20:33.110 "enable_recv_pipe": true, 00:20:33.110 "enable_quickack": false, 00:20:33.110 "enable_placement_id": 0, 00:20:33.110 "enable_zerocopy_send_server": true, 00:20:33.110 "enable_zerocopy_send_client": false, 00:20:33.110 "zerocopy_threshold": 0, 00:20:33.110 "tls_version": 0, 00:20:33.110 "enable_ktls": 
false 00:20:33.110 } 00:20:33.110 } 00:20:33.110 ] 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "subsystem": "vmd", 00:20:33.110 "config": [] 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "subsystem": "accel", 00:20:33.110 "config": [ 00:20:33.110 { 00:20:33.110 "method": "accel_set_options", 00:20:33.110 "params": { 00:20:33.110 "small_cache_size": 128, 00:20:33.110 "large_cache_size": 16, 00:20:33.110 "task_count": 2048, 00:20:33.110 "sequence_count": 2048, 00:20:33.110 "buf_count": 2048 00:20:33.110 } 00:20:33.110 } 00:20:33.110 ] 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "subsystem": "bdev", 00:20:33.110 "config": [ 00:20:33.110 { 00:20:33.110 "method": "bdev_set_options", 00:20:33.110 "params": { 00:20:33.110 "bdev_io_pool_size": 65535, 00:20:33.110 "bdev_io_cache_size": 256, 00:20:33.110 "bdev_auto_examine": true, 00:20:33.110 "iobuf_small_cache_size": 128, 00:20:33.110 "iobuf_large_cache_size": 16 00:20:33.110 } 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "method": "bdev_raid_set_options", 00:20:33.110 "params": { 00:20:33.110 "process_window_size_kb": 1024, 00:20:33.110 "process_max_bandwidth_mb_sec": 0 00:20:33.110 } 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "method": "bdev_iscsi_set_options", 00:20:33.110 "params": { 00:20:33.110 "timeout_sec": 30 00:20:33.110 } 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "method": "bdev_nvme_set_options", 00:20:33.110 "params": { 00:20:33.110 "action_on_timeout": "none", 00:20:33.110 "timeout_us": 0, 00:20:33.110 "timeout_admin_us": 0, 00:20:33.110 "keep_alive_timeout_ms": 10000, 00:20:33.110 "arbitration_burst": 0, 00:20:33.110 "low_priority_weight": 0, 00:20:33.110 "medium_priority_weight": 0, 00:20:33.110 "high_priority_weight": 0, 00:20:33.110 "nvme_adminq_poll_period_us": 10000, 00:20:33.110 "nvme_ioq_poll_period_us": 0, 00:20:33.110 "io_queue_requests": 0, 00:20:33.110 "delay_cmd_submit": true, 00:20:33.110 "transport_retry_count": 4, 00:20:33.110 "bdev_retry_count": 3, 00:20:33.110 "transport_ack_timeout": 0, 00:20:33.110 "ctrlr_loss_timeout_sec": 0, 00:20:33.110 "reconnect_delay_sec": 0, 00:20:33.110 "fast_io_fail_timeout_sec": 0, 00:20:33.110 "disable_auto_failback": false, 00:20:33.110 "generate_uuids": false, 00:20:33.110 "transport_tos": 0, 00:20:33.110 "nvme_error_stat": false, 00:20:33.110 "rdma_srq_size": 0, 00:20:33.110 "io_path_stat": false, 00:20:33.110 "allow_accel_sequence": false, 00:20:33.110 "rdma_max_cq_size": 0, 00:20:33.110 "rdma_cm_event_timeout_ms": 0, 00:20:33.110 "dhchap_digests": [ 00:20:33.110 "sha256", 00:20:33.110 "sha384", 00:20:33.110 "sha512" 00:20:33.110 ], 00:20:33.110 "dhchap_dhgroups": [ 00:20:33.110 "null", 00:20:33.110 "ffdhe2048", 00:20:33.110 "ffdhe3072", 00:20:33.110 "ffdhe4096", 00:20:33.110 "ffdhe6144", 00:20:33.110 "ffdhe8192" 00:20:33.110 ] 00:20:33.110 } 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "method": "bdev_nvme_set_hotplug", 00:20:33.110 "params": { 00:20:33.110 "period_us": 100000, 00:20:33.110 "enable": false 00:20:33.110 } 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "method": "bdev_malloc_create", 00:20:33.110 "params": { 00:20:33.110 "name": "malloc0", 00:20:33.110 "num_blocks": 8192, 00:20:33.110 "block_size": 4096, 00:20:33.110 "physical_block_size": 4096, 00:20:33.110 "uuid": "1b7f6adf-e720-4f51-a27a-d34f1b69f7fc", 00:20:33.110 "optimal_io_boundary": 0, 00:20:33.110 "md_size": 0, 00:20:33.110 "dif_type": 0, 00:20:33.110 "dif_is_head_of_md": false, 00:20:33.110 "dif_pi_format": 0 00:20:33.110 } 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "method": "bdev_wait_for_examine" 
00:20:33.110 } 00:20:33.110 ] 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "subsystem": "nbd", 00:20:33.110 "config": [] 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "subsystem": "scheduler", 00:20:33.110 "config": [ 00:20:33.110 { 00:20:33.110 "method": "framework_set_scheduler", 00:20:33.110 "params": { 00:20:33.110 "name": "static" 00:20:33.110 } 00:20:33.110 } 00:20:33.110 ] 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "subsystem": "nvmf", 00:20:33.110 "config": [ 00:20:33.110 { 00:20:33.110 "method": "nvmf_set_config", 00:20:33.110 "params": { 00:20:33.110 "discovery_filter": "match_any", 00:20:33.110 "admin_cmd_passthru": { 00:20:33.110 "identify_ctrlr": false 00:20:33.110 }, 00:20:33.110 "dhchap_digests": [ 00:20:33.110 "sha256", 00:20:33.110 "sha384", 00:20:33.110 "sha512" 00:20:33.110 ], 00:20:33.110 "dhchap_dhgroups": [ 00:20:33.110 "null", 00:20:33.110 "ffdhe2048", 00:20:33.110 "ffdhe3072", 00:20:33.110 "ffdhe4096", 00:20:33.110 "ffdhe6144", 00:20:33.110 "ffdhe8192" 00:20:33.110 ] 00:20:33.110 } 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "method": "nvmf_set_max_subsystems", 00:20:33.110 "params": { 00:20:33.110 "max_subsystems": 1024 00:20:33.110 } 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "method": "nvmf_set_crdt", 00:20:33.110 "params": { 00:20:33.110 "crdt1": 0, 00:20:33.110 "crdt2": 0, 00:20:33.110 "crdt3": 0 00:20:33.110 } 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "method": "nvmf_create_transport", 00:20:33.110 "params": { 00:20:33.110 "trtype": "TCP", 00:20:33.110 "max_queue_depth": 128, 00:20:33.110 "max_io_qpairs_per_ctrlr": 127, 00:20:33.110 "in_capsule_data_size": 4096, 00:20:33.110 "max_io_size": 131072, 00:20:33.110 "io_unit_size": 131072, 00:20:33.110 "max_aq_depth": 128, 00:20:33.110 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.110 "num_shared_buffers": 511, 00:20:33.110 "buf_cache_size": 4294967295, 00:20:33.110 "dif_insert_or_strip": false, 00:20:33.110 "zcopy": false, 00:20:33.110 "c2h_success": false, 00:20:33.110 "sock_priority": 0, 00:20:33.110 "abort_timeout_sec": 1, 00:20:33.110 "ack_timeout": 0, 00:20:33.110 "data_wr_pool_size": 0 00:20:33.110 } 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "method": "nvmf_create_subsystem", 00:20:33.110 "params": { 00:20:33.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.110 "allow_any_host": false, 00:20:33.110 "serial_number": "00000000000000000000", 00:20:33.110 "model_number": "SPDK bdev Controller", 00:20:33.110 "max_namespaces": 32, 00:20:33.110 "min_cntlid": 1, 00:20:33.110 "max_cntlid": 65519, 00:20:33.110 "ana_reporting": false 00:20:33.110 } 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "method": "nvmf_subsystem_add_host", 00:20:33.110 "params": { 00:20:33.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.110 "host": "nqn.2016-06.io.spdk:host1", 00:20:33.110 "psk": "key0" 00:20:33.110 } 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "method": "nvmf_subsystem_add_ns", 00:20:33.110 "params": { 00:20:33.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.110 "namespace": { 00:20:33.110 "nsid": 1, 00:20:33.110 "bdev_name": "malloc0", 00:20:33.110 "nguid": "1B7F6ADFE7204F51A27AD34F1B69F7FC", 00:20:33.110 "uuid": "1b7f6adf-e720-4f51-a27a-d34f1b69f7fc", 00:20:33.110 "no_auto_visible": false 00:20:33.110 } 00:20:33.110 } 00:20:33.110 }, 00:20:33.110 { 00:20:33.110 "method": "nvmf_subsystem_add_listener", 00:20:33.110 "params": { 00:20:33.110 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.110 "listen_address": { 00:20:33.110 "trtype": "TCP", 00:20:33.110 "adrfam": "IPv4", 
00:20:33.110 "traddr": "10.0.0.2", 00:20:33.110 "trsvcid": "4420" 00:20:33.110 }, 00:20:33.110 "secure_channel": false, 00:20:33.110 "sock_impl": "ssl" 00:20:33.110 } 00:20:33.110 } 00:20:33.110 ] 00:20:33.110 } 00:20:33.111 ] 00:20:33.111 }' 00:20:33.111 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=3018987 00:20:33.111 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 3018987 00:20:33.111 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:33.111 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3018987 ']' 00:20:33.111 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:33.111 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:33.111 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:33.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:33.111 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:33.111 04:31:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:33.371 [2024-11-05 04:31:46.787235] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:20:33.371 [2024-11-05 04:31:46.787322] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:33.371 [2024-11-05 04:31:46.866710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.371 [2024-11-05 04:31:46.901837] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.371 [2024-11-05 04:31:46.901869] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.371 [2024-11-05 04:31:46.901878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.371 [2024-11-05 04:31:46.901884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.371 [2024-11-05 04:31:46.901890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
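At this point the test restarts the target from the JSON captured earlier with save_config rather than reissuing individual RPCs: the tgtcfg string echoed above is fed to nvmf_tgt as /dev/fd/62 via process substitution. A sketch of the save-and-replay pattern, paths shortened as before (the actual run also wraps the target in ip netns exec for its cvl_0_0_ns_spdk test namespace):

  # capture the live configuration of both applications as JSON
  tgtcfg=$(rpc.py save_config)
  bperfcfg=$(rpc.py -s /var/tmp/bdevperf.sock save_config)
  # replay it into a fresh target; <(...) is what shows up as -c /dev/fd/62
  nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")

Because the keyring entry, the TLS listener (sock_impl "ssl"), and the host-to-PSK mapping are all part of the saved config, the restarted target comes back up ready to accept the same TLS connection.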
00:20:33.371 [2024-11-05 04:31:46.902471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.632 [2024-11-05 04:31:47.101339] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:33.632 [2024-11-05 04:31:47.133360] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:33.632 [2024-11-05 04:31:47.133591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.204 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:34.204 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:34.204 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:34.204 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:34.204 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.204 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:34.204 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=3019016 00:20:34.204 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 3019016 /var/tmp/bdevperf.sock 00:20:34.204 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 3019016 ']' 00:20:34.204 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:34.204 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:34.205 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:34.205 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:34.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
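The second bdevperf instance is configured the same way: its saved bperfcfg JSON (keyring_file_add_key plus bdev_nvme_attach_controller with psk key0, echoed in full below) is handed over as -c /dev/fd/63 instead of being replayed one RPC at a time. A sketch of the remaining initiator-side steps under the same shortened-path assumption:

  # start bdevperf from the saved JSON config; <(...) appears as /dev/fd/63
  bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 \
      -c <(echo "$bperfcfg") &
  # confirm the controller defined in the config came up
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # drive the verify workload, as in the first pass
  bdevperf.py -s /var/tmp/bdevperf.sock perform_tests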
00:20:34.205 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:34.205 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:34.205 04:31:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:20:34.205 "subsystems": [ 00:20:34.205 { 00:20:34.205 "subsystem": "keyring", 00:20:34.205 "config": [ 00:20:34.205 { 00:20:34.205 "method": "keyring_file_add_key", 00:20:34.205 "params": { 00:20:34.205 "name": "key0", 00:20:34.205 "path": "/tmp/tmp.qzI2yU5L6n" 00:20:34.205 } 00:20:34.205 } 00:20:34.205 ] 00:20:34.205 }, 00:20:34.205 { 00:20:34.205 "subsystem": "iobuf", 00:20:34.205 "config": [ 00:20:34.205 { 00:20:34.205 "method": "iobuf_set_options", 00:20:34.205 "params": { 00:20:34.205 "small_pool_count": 8192, 00:20:34.205 "large_pool_count": 1024, 00:20:34.205 "small_bufsize": 8192, 00:20:34.205 "large_bufsize": 135168, 00:20:34.205 "enable_numa": false 00:20:34.205 } 00:20:34.205 } 00:20:34.205 ] 00:20:34.205 }, 00:20:34.205 { 00:20:34.205 "subsystem": "sock", 00:20:34.205 "config": [ 00:20:34.205 { 00:20:34.205 "method": "sock_set_default_impl", 00:20:34.205 "params": { 00:20:34.205 "impl_name": "posix" 00:20:34.205 } 00:20:34.205 }, 00:20:34.205 { 00:20:34.205 "method": "sock_impl_set_options", 00:20:34.205 "params": { 00:20:34.205 "impl_name": "ssl", 00:20:34.205 "recv_buf_size": 4096, 00:20:34.205 "send_buf_size": 4096, 00:20:34.205 "enable_recv_pipe": true, 00:20:34.205 "enable_quickack": false, 00:20:34.205 "enable_placement_id": 0, 00:20:34.205 "enable_zerocopy_send_server": true, 00:20:34.205 "enable_zerocopy_send_client": false, 00:20:34.205 "zerocopy_threshold": 0, 00:20:34.205 "tls_version": 0, 00:20:34.205 "enable_ktls": false 00:20:34.205 } 00:20:34.205 }, 00:20:34.205 { 00:20:34.205 "method": "sock_impl_set_options", 00:20:34.205 "params": { 00:20:34.205 "impl_name": "posix", 00:20:34.205 "recv_buf_size": 2097152, 00:20:34.205 "send_buf_size": 2097152, 00:20:34.205 "enable_recv_pipe": true, 00:20:34.205 "enable_quickack": false, 00:20:34.205 "enable_placement_id": 0, 00:20:34.205 "enable_zerocopy_send_server": true, 00:20:34.205 "enable_zerocopy_send_client": false, 00:20:34.205 "zerocopy_threshold": 0, 00:20:34.205 "tls_version": 0, 00:20:34.205 "enable_ktls": false 00:20:34.205 } 00:20:34.205 } 00:20:34.205 ] 00:20:34.205 }, 00:20:34.205 { 00:20:34.205 "subsystem": "vmd", 00:20:34.205 "config": [] 00:20:34.205 }, 00:20:34.205 { 00:20:34.205 "subsystem": "accel", 00:20:34.205 "config": [ 00:20:34.205 { 00:20:34.205 "method": "accel_set_options", 00:20:34.205 "params": { 00:20:34.205 "small_cache_size": 128, 00:20:34.205 "large_cache_size": 16, 00:20:34.205 "task_count": 2048, 00:20:34.205 "sequence_count": 2048, 00:20:34.205 "buf_count": 2048 00:20:34.205 } 00:20:34.205 } 00:20:34.205 ] 00:20:34.205 }, 00:20:34.205 { 00:20:34.205 "subsystem": "bdev", 00:20:34.205 "config": [ 00:20:34.205 { 00:20:34.205 "method": "bdev_set_options", 00:20:34.205 "params": { 00:20:34.205 "bdev_io_pool_size": 65535, 00:20:34.205 "bdev_io_cache_size": 256, 00:20:34.205 "bdev_auto_examine": true, 00:20:34.205 "iobuf_small_cache_size": 128, 00:20:34.205 "iobuf_large_cache_size": 16 00:20:34.205 } 00:20:34.205 }, 00:20:34.205 { 00:20:34.205 "method": "bdev_raid_set_options", 00:20:34.205 "params": { 00:20:34.205 "process_window_size_kb": 1024, 00:20:34.205 "process_max_bandwidth_mb_sec": 0 00:20:34.205 } 00:20:34.205 }, 00:20:34.205 { 00:20:34.205 "method": "bdev_iscsi_set_options", 
00:20:34.205 "params": { 00:20:34.205 "timeout_sec": 30 00:20:34.205 } 00:20:34.205 }, 00:20:34.205 { 00:20:34.205 "method": "bdev_nvme_set_options", 00:20:34.205 "params": { 00:20:34.205 "action_on_timeout": "none", 00:20:34.205 "timeout_us": 0, 00:20:34.205 "timeout_admin_us": 0, 00:20:34.205 "keep_alive_timeout_ms": 10000, 00:20:34.205 "arbitration_burst": 0, 00:20:34.205 "low_priority_weight": 0, 00:20:34.205 "medium_priority_weight": 0, 00:20:34.205 "high_priority_weight": 0, 00:20:34.205 "nvme_adminq_poll_period_us": 10000, 00:20:34.205 "nvme_ioq_poll_period_us": 0, 00:20:34.205 "io_queue_requests": 512, 00:20:34.205 "delay_cmd_submit": true, 00:20:34.205 "transport_retry_count": 4, 00:20:34.205 "bdev_retry_count": 3, 00:20:34.205 "transport_ack_timeout": 0, 00:20:34.205 "ctrlr_loss_timeout_sec": 0, 00:20:34.205 "reconnect_delay_sec": 0, 00:20:34.205 "fast_io_fail_timeout_sec": 0, 00:20:34.205 "disable_auto_failback": false, 00:20:34.205 "generate_uuids": false, 00:20:34.205 "transport_tos": 0, 00:20:34.205 "nvme_error_stat": false, 00:20:34.205 "rdma_srq_size": 0, 00:20:34.205 "io_path_stat": false, 00:20:34.205 "allow_accel_sequence": false, 00:20:34.205 "rdma_max_cq_size": 0, 00:20:34.205 "rdma_cm_event_timeout_ms": 0, 00:20:34.205 "dhchap_digests": [ 00:20:34.205 "sha256", 00:20:34.205 "sha384", 00:20:34.205 "sha512" 00:20:34.205 ], 00:20:34.205 "dhchap_dhgroups": [ 00:20:34.205 "null", 00:20:34.205 "ffdhe2048", 00:20:34.205 "ffdhe3072", 00:20:34.205 "ffdhe4096", 00:20:34.205 "ffdhe6144", 00:20:34.205 "ffdhe8192" 00:20:34.205 ] 00:20:34.205 } 00:20:34.205 }, 00:20:34.205 { 00:20:34.205 "method": "bdev_nvme_attach_controller", 00:20:34.205 "params": { 00:20:34.205 "name": "nvme0", 00:20:34.205 "trtype": "TCP", 00:20:34.205 "adrfam": "IPv4", 00:20:34.205 "traddr": "10.0.0.2", 00:20:34.205 "trsvcid": "4420", 00:20:34.205 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:34.205 "prchk_reftag": false, 00:20:34.205 "prchk_guard": false, 00:20:34.205 "ctrlr_loss_timeout_sec": 0, 00:20:34.205 "reconnect_delay_sec": 0, 00:20:34.205 "fast_io_fail_timeout_sec": 0, 00:20:34.205 "psk": "key0", 00:20:34.205 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:34.205 "hdgst": false, 00:20:34.205 "ddgst": false, 00:20:34.205 "multipath": "multipath" 00:20:34.205 } 00:20:34.205 }, 00:20:34.205 { 00:20:34.205 "method": "bdev_nvme_set_hotplug", 00:20:34.205 "params": { 00:20:34.205 "period_us": 100000, 00:20:34.205 "enable": false 00:20:34.205 } 00:20:34.205 }, 00:20:34.205 { 00:20:34.205 "method": "bdev_enable_histogram", 00:20:34.205 "params": { 00:20:34.205 "name": "nvme0n1", 00:20:34.205 "enable": true 00:20:34.205 } 00:20:34.205 }, 00:20:34.205 { 00:20:34.205 "method": "bdev_wait_for_examine" 00:20:34.205 } 00:20:34.205 ] 00:20:34.205 }, 00:20:34.205 { 00:20:34.205 "subsystem": "nbd", 00:20:34.205 "config": [] 00:20:34.205 } 00:20:34.205 ] 00:20:34.205 }' 00:20:34.205 [2024-11-05 04:31:47.655922] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:20:34.206 [2024-11-05 04:31:47.655980] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3019016 ] 00:20:34.206 [2024-11-05 04:31:47.728787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:34.206 [2024-11-05 04:31:47.769742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:34.467 [2024-11-05 04:31:47.904941] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:35.037 04:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:35.037 04:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:20:35.037 04:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:35.037 04:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:20:35.037 04:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.037 04:31:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:35.298 Running I/O for 1 seconds... 00:20:36.240 3767.00 IOPS, 14.71 MiB/s 00:20:36.240 Latency(us) 00:20:36.240 [2024-11-05T03:31:49.880Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.240 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:36.240 Verification LBA range: start 0x0 length 0x2000 00:20:36.240 nvme0n1 : 1.05 3698.49 14.45 0.00 0.00 33822.15 5925.55 74274.13 00:20:36.240 [2024-11-05T03:31:49.880Z] =================================================================================================================== 00:20:36.240 [2024-11-05T03:31:49.880Z] Total : 3698.49 14.45 0.00 0.00 33822.15 5925.55 74274.13 00:20:36.240 { 00:20:36.240 "results": [ 00:20:36.240 { 00:20:36.240 "job": "nvme0n1", 00:20:36.240 "core_mask": "0x2", 00:20:36.240 "workload": "verify", 00:20:36.240 "status": "finished", 00:20:36.240 "verify_range": { 00:20:36.240 "start": 0, 00:20:36.240 "length": 8192 00:20:36.240 }, 00:20:36.240 "queue_depth": 128, 00:20:36.240 "io_size": 4096, 00:20:36.240 "runtime": 1.053402, 00:20:36.240 "iops": 3698.49307291993, 00:20:36.240 "mibps": 14.447238566093477, 00:20:36.240 "io_failed": 0, 00:20:36.240 "io_timeout": 0, 00:20:36.240 "avg_latency_us": 33822.15227926078, 00:20:36.240 "min_latency_us": 5925.546666666667, 00:20:36.240 "max_latency_us": 74274.13333333333 00:20:36.240 } 00:20:36.240 ], 00:20:36.240 "core_count": 1 00:20:36.240 } 00:20:36.240 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:20:36.240 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:20:36.240 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:36.240 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:20:36.240 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:20:36.240 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = 
--pid ']' 00:20:36.240 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:36.240 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:36.240 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:36.240 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:36.240 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:36.240 nvmf_trace.0 00:20:36.499 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:20:36.499 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3019016 00:20:36.499 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3019016 ']' 00:20:36.499 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3019016 00:20:36.499 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:36.499 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:36.499 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3019016 00:20:36.499 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:36.500 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:36.500 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3019016' 00:20:36.500 killing process with pid 3019016 00:20:36.500 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3019016 00:20:36.500 Received shutdown signal, test time was about 1.000000 seconds 00:20:36.500 00:20:36.500 Latency(us) 00:20:36.500 [2024-11-05T03:31:50.140Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:36.500 [2024-11-05T03:31:50.140Z] =================================================================================================================== 00:20:36.500 [2024-11-05T03:31:50.140Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:36.500 04:31:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3019016 00:20:36.500 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:36.500 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:36.500 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:20:36.500 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:36.500 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:20:36.500 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:36.500 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:36.500 rmmod nvme_tcp 00:20:36.500 rmmod nvme_fabrics 00:20:36.500 rmmod nvme_keyring 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:36.760 04:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 3018987 ']' 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 3018987 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 3018987 ']' 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 3018987 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3018987 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3018987' 00:20:36.760 killing process with pid 3018987 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 3018987 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 3018987 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:36.760 04:31:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.EPTSa5OLTr /tmp/tmp.UrHiR76Y6t /tmp/tmp.qzI2yU5L6n 00:20:39.305 00:20:39.305 real 1m22.959s 00:20:39.305 user 2m8.310s 00:20:39.305 sys 0m26.732s 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:39.305 ************************************ 00:20:39.305 END TEST nvmf_tls 
00:20:39.305 ************************************ 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:39.305 ************************************ 00:20:39.305 START TEST nvmf_fips 00:20:39.305 ************************************ 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:39.305 * Looking for test storage... 00:20:39.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:39.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.305 --rc genhtml_branch_coverage=1 00:20:39.305 --rc genhtml_function_coverage=1 00:20:39.305 --rc genhtml_legend=1 00:20:39.305 --rc geninfo_all_blocks=1 00:20:39.305 --rc geninfo_unexecuted_blocks=1 00:20:39.305 00:20:39.305 ' 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:39.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.305 --rc genhtml_branch_coverage=1 00:20:39.305 --rc genhtml_function_coverage=1 00:20:39.305 --rc genhtml_legend=1 00:20:39.305 --rc geninfo_all_blocks=1 00:20:39.305 --rc geninfo_unexecuted_blocks=1 00:20:39.305 00:20:39.305 ' 00:20:39.305 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:39.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.306 --rc genhtml_branch_coverage=1 00:20:39.306 --rc genhtml_function_coverage=1 00:20:39.306 --rc genhtml_legend=1 00:20:39.306 --rc geninfo_all_blocks=1 00:20:39.306 --rc geninfo_unexecuted_blocks=1 00:20:39.306 00:20:39.306 ' 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:39.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:39.306 --rc genhtml_branch_coverage=1 00:20:39.306 --rc genhtml_function_coverage=1 00:20:39.306 --rc genhtml_legend=1 00:20:39.306 --rc geninfo_all_blocks=1 00:20:39.306 --rc geninfo_unexecuted_blocks=1 00:20:39.306 00:20:39.306 ' 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:39.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:20:39.306 04:31:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:20:39.306 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:20:39.307 Error setting digest 00:20:39.307 40C2CAF9A37F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:20:39.307 40C2CAF9A37F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:39.307 
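The fips.sh trace above verifies FIPS mode in two steps: it confirms the OpenSSL FIPS provider module exists and is listed (openssl info -modulesdir, openssl list -providers), then runs a negative probe — MD5 is not a FIPS-approved digest, so the `openssl md5` call is expected to fail, which is exactly the "Error setting digest ... unsupported" result logged here. A minimal standalone sketch of the same probe follows; the /usr/lib64 module path is taken from this RHEL 9 run and is not portable:

    #!/usr/bin/env bash
    # Sketch: check that an OpenSSL 3.x FIPS provider is actually active.
    set -euo pipefail

    moddir=$(openssl info -modulesdir)     # e.g. /usr/lib64/ossl-modules on RHEL 9
    [[ -f "$moddir/fips.so" ]] || { echo "no fips.so module"; exit 1; }

    # The fips provider must show up next to the base provider.
    openssl list -providers | grep -i fips >/dev/null \
        || { echo "fips provider not loaded"; exit 1; }

    # Negative probe: a non-approved digest must be rejected under FIPS.
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "md5 succeeded - FIPS restrictions are NOT in effect"; exit 1
    fi
    echo "FIPS mode looks active"

Probing for a forbidden algorithm is deliberately stronger than just listing providers: it proves the restriction is enforced in the default library context, not merely that the module is installed.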
04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:20:39.307 04:31:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.451 04:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:47.451 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:47.451 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.451 04:31:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:47.451 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:47.451 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:47.451 04:31:59 
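The device-discovery trace above matches Intel E810 functions (vendor 0x8086, device 0x159b) and then resolves each PCI function to its kernel net device through "/sys/bus/pci/devices/$pci/net/"*, producing the "Found net devices under 0000:4b:00.x: cvl_0_x" lines. A condensed sketch of that sysfs walk, with the vendor/device IDs taken from this run:

    #!/usr/bin/env bash
    # Sketch: map Intel E810 (8086:159b) PCI functions to their net devices,
    # mirroring the sysfs lookup nvmf/common.sh performs.
    shopt -s nullglob
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 ]] || continue
        [[ $(cat "$pci/device") == 0x159b ]] || continue
        for netdev in "$pci"/net/*; do
            echo "Found net devices under ${pci##*/}: ${netdev##*/}"
        done
    done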
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.451 04:31:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.451 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.451 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.451 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:47.451 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.451 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.451 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.451 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:47.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:20:47.452 00:20:47.452 --- 10.0.0.2 ping statistics --- 00:20:47.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.452 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:47.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms 00:20:47.452 00:20:47.452 --- 10.0.0.1 ping statistics --- 00:20:47.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.452 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=3023739 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 3023739 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3023739 ']' 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:47.452 04:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:47.452 [2024-11-05 04:32:00.367246] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
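nvmf_tcp_init, traced above, splits the two E810 ports into target and initiator roles on one machine: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), an iptables rule admits NVMe/TCP traffic on port 4420, and the two one-packet pings confirm the path before the target starts. A condensed sketch of that setup, with interface names and addresses taken from this run:

    #!/usr/bin/env bash
    # Sketch of nvmf_tcp_init: isolate the target port in a network namespace.
    set -euo pipefail
    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up

    # Admit NVMe/TCP traffic arriving on the initiator-side interface.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                         # root ns -> namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1     # namespace -> root ns

Running the target inside the namespace (the NVMF_TARGET_NS_CMD prefix seen above) is what lets a single host exercise real NIC-to-NIC traffic instead of loopback.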
00:20:47.452 [2024-11-05 04:32:00.367318] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.452 [2024-11-05 04:32:00.465917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.452 [2024-11-05 04:32:00.516084] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.452 [2024-11-05 04:32:00.516137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.452 [2024-11-05 04:32:00.516146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.452 [2024-11-05 04:32:00.516153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.452 [2024-11-05 04:32:00.516165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:47.452 [2024-11-05 04:32:00.516925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.713 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:47.713 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:47.713 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:47.713 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:47.713 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:47.713 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:47.713 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:47.713 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:47.713 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:47.713 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.43L 00:20:47.713 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:47.713 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.43L 00:20:47.713 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.43L 00:20:47.713 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.43L 00:20:47.713 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:47.976 [2024-11-05 04:32:01.383034] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.976 [2024-11-05 04:32:01.399034] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:47.976 [2024-11-05 04:32:01.399358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:47.976 malloc0 00:20:47.976 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.976 04:32:01 
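Once the target is up, the trace above writes the NVMe/TCP TLS PSK in interchange format to a mode-0600 temp file and configures the target over rpc.py, with the listener logging that TLS support is experimental. A minimal sketch of that target-side RPC sequence follows; the RPC names are SPDK's, but exact flags can vary between SPDK versions, and the malloc bdev sizing here is an assumption rather than a value from this trace:

    #!/usr/bin/env bash
    # Sketch of the PSK handling and target configuration around
    # fips.sh's setup_nvmf_tgt_conf. Flags may differ by SPDK version.
    set -euo pipefail
    rpc=./scripts/rpc.py   # run from the SPDK tree

    key_path=$(mktemp -t spdk-psk.XXX)
    # Test PSK from this run, in the NVMe TLS interchange format.
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key_path"
    chmod 0600 "$key_path"                     # keys must not be world-readable

    $rpc nvmf_create_transport -t tcp
    $rpc bdev_malloc_create -b malloc0 32 4096   # sizing assumed for the sketch
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
    $rpc keyring_file_add_key key0 "$key_path"
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk key0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 --secure-channel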
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3024078 00:20:47.976 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3024078 /var/tmp/bdevperf.sock 00:20:47.976 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:47.976 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 3024078 ']' 00:20:47.976 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.976 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:47.976 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.976 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:47.976 04:32:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:47.976 [2024-11-05 04:32:01.538652] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:20:47.976 [2024-11-05 04:32:01.538727] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3024078 ] 00:20:47.976 [2024-11-05 04:32:01.601371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.236 [2024-11-05 04:32:01.637788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.808 04:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:48.808 04:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:20:48.808 04:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.43L 00:20:49.069 04:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:49.069 [2024-11-05 04:32:02.644532] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:49.330 TLSTESTn1 00:20:49.330 04:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:49.330 Running I/O for 10 seconds... 
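The initiator side mirrors the key registration inside bdevperf's own RPC server: the same PSK file is added under the name key0 on /var/tmp/bdevperf.sock, bdev_nvme_attach_controller connects over the secure channel (logging the same "TLS support is considered experimental" notice), and perform_tests starts the timed run. A sketch assembled from the arguments in this trace, with paths relative to the SPDK tree:

    #!/usr/bin/env bash
    # Sketch of the initiator side of the FIPS TLS test. bdevperf's -z makes
    # it start idle and wait for the perform_tests RPC.
    set -euo pipefail
    sock=/var/tmp/bdevperf.sock
    rpc=./scripts/rpc.py

    ./build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
    sleep 1   # the harness uses waitforlisten; a short wait stands in here

    $rpc -s "$sock" keyring_file_add_key key0 /tmp/spdk-psk.43L
    $rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    ./examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests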
00:20:51.211 6055.00 IOPS, 23.65 MiB/s [2024-11-05T03:32:06.234Z] 5703.50 IOPS, 22.28 MiB/s [2024-11-05T03:32:07.175Z] 5488.67 IOPS, 21.44 MiB/s [2024-11-05T03:32:08.116Z] 5647.25 IOPS, 22.06 MiB/s [2024-11-05T03:32:09.058Z] 5714.20 IOPS, 22.32 MiB/s [2024-11-05T03:32:09.998Z] 5670.67 IOPS, 22.15 MiB/s [2024-11-05T03:32:10.939Z] 5786.14 IOPS, 22.60 MiB/s [2024-11-05T03:32:11.880Z] 5795.00 IOPS, 22.64 MiB/s [2024-11-05T03:32:13.267Z] 5881.89 IOPS, 22.98 MiB/s [2024-11-05T03:32:13.267Z] 5877.90 IOPS, 22.96 MiB/s 00:20:59.627 Latency(us) 00:20:59.627 [2024-11-05T03:32:13.267Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.627 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:59.627 Verification LBA range: start 0x0 length 0x2000 00:20:59.627 TLSTESTn1 : 10.05 5861.88 22.90 0.00 0.00 21774.29 4887.89 48278.19 00:20:59.627 [2024-11-05T03:32:13.267Z] =================================================================================================================== 00:20:59.627 [2024-11-05T03:32:13.267Z] Total : 5861.88 22.90 0.00 0.00 21774.29 4887.89 48278.19 00:20:59.627 { 00:20:59.627 "results": [ 00:20:59.627 { 00:20:59.627 "job": "TLSTESTn1", 00:20:59.627 "core_mask": "0x4", 00:20:59.627 "workload": "verify", 00:20:59.627 "status": "finished", 00:20:59.627 "verify_range": { 00:20:59.627 "start": 0, 00:20:59.627 "length": 8192 00:20:59.627 }, 00:20:59.627 "queue_depth": 128, 00:20:59.627 "io_size": 4096, 00:20:59.627 "runtime": 10.049173, 00:20:59.627 "iops": 5861.875400095112, 00:20:59.627 "mibps": 22.89795078162153, 00:20:59.627 "io_failed": 0, 00:20:59.627 "io_timeout": 0, 00:20:59.627 "avg_latency_us": 21774.292982045146, 00:20:59.627 "min_latency_us": 4887.893333333333, 00:20:59.627 "max_latency_us": 48278.18666666667 00:20:59.627 } 00:20:59.627 ], 00:20:59.627 "core_count": 1 00:20:59.627 } 00:20:59.627 04:32:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:59.628 04:32:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:59.628 04:32:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:20:59.628 04:32:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:20:59.628 04:32:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:20:59.628 04:32:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:59.628 04:32:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:20:59.628 04:32:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:20:59.628 04:32:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:20:59.628 04:32:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:59.628 nvmf_trace.0 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3024078 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3024078 ']' 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 3024078 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3024078 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3024078' 00:20:59.628 killing process with pid 3024078 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3024078 00:20:59.628 Received shutdown signal, test time was about 10.000000 seconds 00:20:59.628 00:20:59.628 Latency(us) 00:20:59.628 [2024-11-05T03:32:13.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.628 [2024-11-05T03:32:13.268Z] =================================================================================================================== 00:20:59.628 [2024-11-05T03:32:13.268Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3024078 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:59.628 rmmod nvme_tcp 00:20:59.628 rmmod nvme_fabrics 00:20:59.628 rmmod nvme_keyring 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 3023739 ']' 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 3023739 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 3023739 ']' 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 3023739 00:20:59.628 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3023739 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:20:59.888 04:32:13 
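The teardown traced here and continuing just below follows the usual nvmftestfini pattern: kill both SPDK processes, unload the kernel initiator modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), strip only the SPDK-tagged iptables rules, and remove the scratch namespace and PSK file. A hedged sketch of that sequence, with PIDs and names from this run (cleanup is deliberately tolerant of steps that have already happened):

    #!/usr/bin/env bash
    # Sketch of the fips.sh cleanup path; no set -e, since partial teardown
    # states are expected here.
    kill 3024078 3023739 2>/dev/null || true   # bdevperf, then nvmf_tgt
    modprobe -v -r nvme-tcp                    # also drops nvme_fabrics/nvme_keyring

    # Remove only rules carrying the SPDK_NVMF comment, leaving the rest intact.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1 2>/dev/null || true
    rm -f /tmp/spdk-psk.43L

Tagging the firewall rules with a comment at insertion time is what makes the grep -v restore safe on a shared CI host: every other rule survives the round-trip untouched.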
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3023739' 00:20:59.888 killing process with pid 3023739 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 3023739 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 3023739 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:59.888 04:32:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.43L 00:21:02.433 00:21:02.433 real 0m23.022s 00:21:02.433 user 0m24.976s 00:21:02.433 sys 0m9.287s 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:02.433 ************************************ 00:21:02.433 END TEST nvmf_fips 00:21:02.433 ************************************ 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:02.433 ************************************ 00:21:02.433 START TEST nvmf_control_msg_list 00:21:02.433 ************************************ 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:21:02.433 * Looking for test storage... 
00:21:02.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:02.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.433 --rc genhtml_branch_coverage=1 00:21:02.433 --rc genhtml_function_coverage=1 00:21:02.433 --rc genhtml_legend=1 00:21:02.433 --rc geninfo_all_blocks=1 00:21:02.433 --rc geninfo_unexecuted_blocks=1 00:21:02.433 00:21:02.433 ' 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:02.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.433 --rc genhtml_branch_coverage=1 00:21:02.433 --rc genhtml_function_coverage=1 00:21:02.433 --rc genhtml_legend=1 00:21:02.433 --rc geninfo_all_blocks=1 00:21:02.433 --rc geninfo_unexecuted_blocks=1 00:21:02.433 00:21:02.433 ' 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:02.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.433 --rc genhtml_branch_coverage=1 00:21:02.433 --rc genhtml_function_coverage=1 00:21:02.433 --rc genhtml_legend=1 00:21:02.433 --rc geninfo_all_blocks=1 00:21:02.433 --rc geninfo_unexecuted_blocks=1 00:21:02.433 00:21:02.433 ' 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:02.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:02.433 --rc genhtml_branch_coverage=1 00:21:02.433 --rc genhtml_function_coverage=1 00:21:02.433 --rc genhtml_legend=1 00:21:02.433 --rc geninfo_all_blocks=1 00:21:02.433 --rc geninfo_unexecuted_blocks=1 00:21:02.433 00:21:02.433 ' 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.433 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:02.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:21:02.434 04:32:15 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:21:10.577 04:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:10.577 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.577 04:32:22 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:10.577 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:10.577 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:10.578 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:10.578 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:10.578 04:32:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.578 04:32:23 
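The nvmf_tcp_init steps traced here move the first E810 port (cvl_0_0) into a dedicated network namespace for the target, keep the second port (cvl_0_1) in the root namespace as the initiator, and address both sides on 10.0.0.0/24; the iptables accept rule and the two-way ping check follow just below. A minimal standalone sketch of the whole setup, using the interface names and addresses from this log (adjust for other NICs):

# target NIC gets its own namespace; the initiator NIC stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# admit NVMe/TCP traffic on the default listener port, then verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1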
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:10.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.596 ms 00:21:10.578 00:21:10.578 --- 10.0.0.2 ping statistics --- 00:21:10.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.578 rtt min/avg/max/mdev = 0.596/0.596/0.596/0.000 ms 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:10.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:21:10.578 00:21:10.578 --- 10.0.0.1 ping statistics --- 00:21:10.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.578 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=3030429 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 3030429 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 3030429 ']' 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:10.578 [2024-11-05 04:32:23.153599] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:21:10.578 [2024-11-05 04:32:23.153661] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.578 [2024-11-05 04:32:23.235841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.578 [2024-11-05 04:32:23.276789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.578 [2024-11-05 04:32:23.276825] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.578 [2024-11-05 04:32:23.276832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.578 [2024-11-05 04:32:23.276839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.578 [2024-11-05 04:32:23.276845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
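nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the app answers on its RPC socket (/var/tmp/spdk.sock, per the rpc_addr local above). A rough equivalent using the build paths from this job; the polling loop is an illustrative stand-in for the suite's waitforlisten helper, not its exact implementation:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
nvmfpid=$!
# poll the default RPC socket until the target is up (give up after ~10 s)
for _ in {1..100}; do
    "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
    sleep 0.1
done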
00:21:10.578 [2024-11-05 04:32:23.277436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:10.578 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:21:10.579 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:10.579 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:10.579 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:10.579 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.579 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:10.579 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:10.579 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:21:10.579 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.579 04:32:23 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:10.579 [2024-11-05 04:32:23.999292] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:10.579 Malloc0 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.579 04:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:10.579 [2024-11-05 04:32:24.050194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3030777 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3030778 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3030779 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3030777 00:21:10.579 04:32:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:10.579 [2024-11-05 04:32:24.120641] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:10.579 [2024-11-05 04:32:24.150774] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:10.579 [2024-11-05 04:32:24.151058] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:11.966 Initializing NVMe Controllers 00:21:11.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:11.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:21:11.966 Initialization complete. Launching workers. 
00:21:11.966 ======================================================== 00:21:11.966 Latency(us) 00:21:11.966 Device Information : IOPS MiB/s Average min max 00:21:11.966 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1696.00 6.62 589.33 242.22 802.94 00:21:11.966 ======================================================== 00:21:11.966 Total : 1696.00 6.62 589.33 242.22 802.94 00:21:11.966 00:21:11.966 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3030778 00:21:11.966 Initializing NVMe Controllers 00:21:11.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:11.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:21:11.966 Initialization complete. Launching workers. 00:21:11.966 ======================================================== 00:21:11.966 Latency(us) 00:21:11.966 Device Information : IOPS MiB/s Average min max 00:21:11.966 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40896.57 40782.57 41024.08 00:21:11.966 ======================================================== 00:21:11.966 Total : 25.00 0.10 40896.57 40782.57 41024.08 00:21:11.966 00:21:11.966 Initializing NVMe Controllers 00:21:11.966 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:11.966 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:21:11.966 Initialization complete. Launching workers. 00:21:11.966 ======================================================== 00:21:11.966 Latency(us) 00:21:11.966 Device Information : IOPS MiB/s Average min max 00:21:11.966 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40893.87 40673.91 40961.95 00:21:11.966 ======================================================== 00:21:11.967 Total : 25.00 0.10 40893.87 40673.91 40961.95 00:21:11.967 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3030779 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:11.967 rmmod nvme_tcp 00:21:11.967 rmmod nvme_fabrics 00:21:11.967 rmmod nvme_keyring 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 
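Taken together, the control_msg_list run above is: one TCP transport created with a single control message buffer (--control-msg-num 1), one malloc-backed subsystem listening on 10.0.0.2:4420, and three single-queue spdk_nvme_perf clients pinned to different cores contending for that buffer. The latency tables are consistent with that contention: the client on lcore 1 averages ~0.59 ms while the two on lcores 2 and 3 sit near ~40.9 ms waiting for the shared buffer. A condensed sketch of the same sequence, with paths, NQN, and flags as in the log (rpc.py standing in for the suite's rpc_cmd wrapper):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$spdk/scripts/rpc.py"
perf="$spdk/build/bin/spdk_nvme_perf"
"$rpc" nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
"$rpc" nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
"$rpc" bdev_malloc_create -b Malloc0 32 512
"$rpc" nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
"$rpc" nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# three 1-second, queue-depth-1 random-read clients on cores 1, 2 and 3
for mask in 0x2 0x4 0x8; do
    "$perf" -c "$mask" -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
done
wait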
-- # '[' -n 3030429 ']' 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 3030429 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 3030429 ']' 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 3030429 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3030429 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3030429' 00:21:11.967 killing process with pid 3030429 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 3030429 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 3030429 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:11.967 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:21:12.228 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:21:12.228 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:12.228 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:21:12.228 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:12.228 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:12.228 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.228 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.228 04:32:25 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.195 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:14.195 00:21:14.195 real 0m12.083s 00:21:14.195 user 0m7.929s 00:21:14.195 sys 0m6.251s 00:21:14.195 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:14.195 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:21:14.195 ************************************ 00:21:14.195 END TEST nvmf_control_msg_list 00:21:14.195 
************************************ 00:21:14.195 04:32:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:14.195 04:32:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:14.195 04:32:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:14.195 04:32:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:14.195 ************************************ 00:21:14.195 START TEST nvmf_wait_for_buf 00:21:14.195 ************************************ 00:21:14.195 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:21:14.479 * Looking for test storage... 00:21:14.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:14.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.479 --rc genhtml_branch_coverage=1 00:21:14.479 --rc genhtml_function_coverage=1 00:21:14.479 --rc genhtml_legend=1 00:21:14.479 --rc geninfo_all_blocks=1 00:21:14.479 --rc geninfo_unexecuted_blocks=1 00:21:14.479 00:21:14.479 ' 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:14.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.479 --rc genhtml_branch_coverage=1 00:21:14.479 --rc genhtml_function_coverage=1 00:21:14.479 --rc genhtml_legend=1 00:21:14.479 --rc geninfo_all_blocks=1 00:21:14.479 --rc geninfo_unexecuted_blocks=1 00:21:14.479 00:21:14.479 ' 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:14.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.479 --rc genhtml_branch_coverage=1 00:21:14.479 --rc genhtml_function_coverage=1 00:21:14.479 --rc genhtml_legend=1 00:21:14.479 --rc geninfo_all_blocks=1 00:21:14.479 --rc geninfo_unexecuted_blocks=1 00:21:14.479 00:21:14.479 ' 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:14.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:14.479 --rc genhtml_branch_coverage=1 00:21:14.479 --rc genhtml_function_coverage=1 00:21:14.479 --rc genhtml_legend=1 00:21:14.479 --rc geninfo_all_blocks=1 00:21:14.479 --rc geninfo_unexecuted_blocks=1 00:21:14.479 00:21:14.479 ' 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:14.479 04:32:27 
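The scripts/common.sh trace above is the suite's lcov version gate: lt 1.15 2 splits both version strings on dot/dash/colon (IFS=.-:) and compares them component by component as integers. A condensed sketch of that comparison logic, simplified from the @333-@368 steps traced here (non-numeric components are mapped to 0, as decimal does above):

decimal() {  # pass numeric components through, map anything else to 0
    [[ $1 =~ ^[0-9]+$ ]] && echo "$1" || echo 0
}
cmp_versions() {  # usage: cmp_versions 1.15 '<' 2
    local IFS=.-: op=$2 v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ver1[v]=$(decimal "${ver1[v]:-0}")
        ver2[v]=$(decimal "${ver2[v]:-0}")
        ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
        ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }   # returns 0 (true) when $1 < $2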
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:14.479 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:14.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:14.480 04:32:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:14.480 04:32:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:14.480 04:32:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:14.480 04:32:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:21:14.480 04:32:28 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.673 
04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.673 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:22.674 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:22.674 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:22.674 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:22.674 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.674 04:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:22.674 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.674 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:21:22.674 00:21:22.674 --- 10.0.0.2 ping statistics --- 00:21:22.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.674 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.674 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:22.674 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:21:22.674 00:21:22.674 --- 10.0.0.1 ping statistics --- 00:21:22.674 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.674 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=3035127 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 3035127 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 3035127 ']' 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:22.674 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.675 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:22.675 04:32:35 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:22.675 [2024-11-05 04:32:35.503080] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
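Here nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, which holds the app at the RPC phase so the test can tune iobuf pools before framework_start_init; waitforlisten then polls the UNIX socket until the target answers. A minimal sketch of that launch-and-wait pattern, assuming SPDK's stock scripts/rpc.py and using spdk_get_version as the liveness probe (the real waitforlisten helper in autotest_common.sh differs in detail):

#!/usr/bin/env bash
# Start the target paused at the RPC phase, inside the test namespace.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

# Poll the RPC socket; spdk_get_version is answered even before
# framework_start_init has run, so it works under --wait-for-rpc.
rpc_sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s "$rpc_sock" spdk_get_version &>/dev/null && break
    sleep 0.5
done

Once the socket answers, the test issues iobuf_set_options --small-pool-count 154 and framework_start_init, as traced below.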
00:21:22.675 [2024-11-05 04:32:35.503150] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.675 [2024-11-05 04:32:35.587981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.675 [2024-11-05 04:32:35.627649] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.675 [2024-11-05 04:32:35.627687] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.675 [2024-11-05 04:32:35.627695] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.675 [2024-11-05 04:32:35.627702] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.675 [2024-11-05 04:32:35.627708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.675 [2024-11-05 04:32:35.628303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.675 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:22.675 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:21:22.675 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:22.675 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:22.675 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.936 04:32:36 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:22.936 Malloc0 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:22.936 [2024-11-05 04:32:36.447055] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:22.936 [2024-11-05 04:32:36.483288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.936 04:32:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:23.197 [2024-11-05 04:32:36.589842] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:24.584 Initializing NVMe Controllers 00:21:24.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:21:24.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:21:24.584 Initialization complete. Launching workers. 00:21:24.584 ======================================================== 00:21:24.584 Latency(us) 00:21:24.584 Device Information : IOPS MiB/s Average min max 00:21:24.584 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32295.41 8000.40 63852.40 00:21:24.584 ======================================================== 00:21:24.584 Total : 129.00 16.12 32295.41 8000.40 63852.40 00:21:24.584 00:21:24.584 04:32:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:21:24.584 04:32:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:21:24.584 04:32:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:24.584 04:32:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:24.584 04:32:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:24.584 rmmod nvme_tcp 00:21:24.584 rmmod nvme_fabrics 00:21:24.584 rmmod nvme_keyring 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 3035127 ']' 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 3035127 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 3035127 ']' 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 3035127 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3035127 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3035127' 00:21:24.584 killing process with pid 3035127 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 3035127 00:21:24.584 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 3035127 00:21:24.845 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:24.845 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:24.845 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:24.845 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:21:24.845 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:21:24.845 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:24.845 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:21:24.845 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:24.845 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:24.845 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.846 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.846 04:32:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:26.764 04:32:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:26.764 00:21:26.764 real 0m12.586s 00:21:26.764 user 0m5.074s 00:21:26.764 sys 0m6.068s 00:21:26.764 04:32:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:26.764 04:32:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:21:26.764 ************************************ 00:21:26.764 END TEST nvmf_wait_for_buf 00:21:26.764 ************************************ 00:21:26.764 04:32:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:21:26.764 04:32:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:21:26.764 04:32:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:21:26.764 04:32:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:21:26.764 04:32:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:21:26.764 04:32:40 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.909 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:34.910 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:34.910 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:34.910 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:34.910 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:34.910 ************************************ 00:21:34.910 START TEST nvmf_perf_adq 00:21:34.910 ************************************ 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:21:34.910 * Looking for test storage... 00:21:34.910 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:34.910 04:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:34.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.910 --rc genhtml_branch_coverage=1 00:21:34.910 --rc genhtml_function_coverage=1 00:21:34.910 --rc genhtml_legend=1 00:21:34.910 --rc geninfo_all_blocks=1 00:21:34.910 --rc geninfo_unexecuted_blocks=1 00:21:34.910 00:21:34.910 ' 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:34.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.910 --rc genhtml_branch_coverage=1 00:21:34.910 --rc genhtml_function_coverage=1 00:21:34.910 --rc genhtml_legend=1 00:21:34.910 --rc geninfo_all_blocks=1 00:21:34.910 --rc geninfo_unexecuted_blocks=1 00:21:34.910 00:21:34.910 ' 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:34.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.910 --rc genhtml_branch_coverage=1 00:21:34.910 --rc genhtml_function_coverage=1 00:21:34.910 --rc genhtml_legend=1 00:21:34.910 --rc geninfo_all_blocks=1 00:21:34.910 --rc geninfo_unexecuted_blocks=1 00:21:34.910 00:21:34.910 ' 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:34.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:34.910 --rc genhtml_branch_coverage=1 00:21:34.910 --rc genhtml_function_coverage=1 00:21:34.910 --rc genhtml_legend=1 00:21:34.910 --rc geninfo_all_blocks=1 00:21:34.910 --rc geninfo_unexecuted_blocks=1 00:21:34.910 00:21:34.910 ' 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
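The sourced common.sh repeats the gather_supported_nvmf_pci_devs pass traced earlier for nvmf_wait_for_buf: NIC functions are bucketed into e810/x722/mlx arrays by PCI vendor:device ID, and each match is resolved to its kernel netdev through /sys/bus/pci/devices/$pci/net/. A condensed sketch of that discovery loop, with lspci standing in for the script's prebuilt pci_bus_cache and an operstate read approximating the traced [[ up == up ]] check (both substitutions are assumptions, not the script's exact mechanics):

#!/usr/bin/env bash
intel=0x8086
declare -a e810 net_devs

# Bucket Intel E810 functions by device ID (0x1592/0x159b, as in the trace).
while read -r addr vendor device; do
    [[ $vendor == "$intel" && ( $device == 0x1592 || $device == 0x159b ) ]] &&
        e810+=("$addr") && echo "Found $addr ($vendor - $device)"
done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, "0x"$3, "0x"$4}')

# Resolve each function to its netdev via sysfs, keeping links that are up.
for pci in "${e810[@]}"; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [[ -e $path ]] || continue
        dev=${path##*/}
        [[ $(cat "$path/operstate") == up ]] &&
            net_devs+=("$dev") && echo "Found net devices under $pci: $dev"
    done
done

With two E810 ports found (cvl_0_0 and cvl_0_1 above), the perf_adq run proceeds to reload the ice driver and initialize the same 10.0.0.1/10.0.0.2 namespace topology, as traced below.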
00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:34.910 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:34.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:21:34.911 04:32:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:34.911 04:32:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:41.500 04:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:41.500 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:41.500 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:41.500 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:41.500 04:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:41.500 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.501 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:41.501 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.501 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:41.501 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:41.501 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.501 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:41.501 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:41.501 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.501 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:41.501 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:41.501 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:21:41.501 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:21:41.501 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:21:41.501 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:41.501 04:32:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:42.445 04:32:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:44.357 04:32:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:49.646 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:49.646 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.646 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:49.647 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:49.647 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:49.647 04:33:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:21:49.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:49.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.616 ms
00:21:49.647
00:21:49.647 --- 10.0.0.2 ping statistics ---
00:21:49.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:49.647 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:49.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:49.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.242 ms
00:21:49.647
00:21:49.647 --- 10.0.0.1 ping statistics ---
00:21:49.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:49.647 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:21:49.647 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
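A note on the sequence above: nvmftestinit turns the two E810 ports into a self-contained point-to-point test link. cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as the NVMe/TCP target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1; the iptables rule admits traffic to port 4420, and the two pings verify reachability in both directions before nvme-tcp is loaded. A minimal standalone sketch of the same topology, with illustrative names (eth_a, eth_b, and tgt_ns are not the names the harness generates):

# target port lives in its own namespace so initiator and target can
# share one host without the kernel short-circuiting the traffic
ip netns add tgt_ns
ip link set eth_a netns tgt_ns
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev eth_a
ip netns exec tgt_ns ip link set eth_a up
ip netns exec tgt_ns ip link set lo up
# initiator port stays in the root namespace
ip addr add 10.0.0.1/24 dev eth_b
ip link set eth_b up
# open the NVMe/TCP port and sanity-check the link in both directions
iptables -I INPUT 1 -i eth_b -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec tgt_ns ping -c 1 10.0.0.1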
00:21:49.907 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc
00:21:49.907 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:21:49.907 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable
00:21:49.907 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:49.907 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3045461
00:21:49.907 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3045461
00:21:49.907 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:21:49.907 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3045461 ']'
00:21:49.907 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:49.907 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100
00:21:49.907 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:21:49.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:49.907 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable
00:21:49.907 04:33:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:21:49.907 [2024-11-05 04:33:03.378332] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization...
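The target is launched with --wait-for-rpc, which holds SPDK in a pre-init state: socket-implementation options such as the placement ID and zero-copy send can only be changed before the framework initializes, and that is exactly what the rpc_cmd calls after startup do, followed by framework_start_init and the transport, bdev, subsystem, and listener setup. A sketch of the same bring-up driven by hand, assuming the SPDK repo root as the working directory (rpc_cmd in the harness is a thin wrapper around scripts/rpc.py, and every method name below appears verbatim in this log):

# pre-init: tune the posix sock implementation, then leave the pre-init state
scripts/rpc.py sock_impl_set_options -i posix --enable-placement-id 0 --enable-zerocopy-send-server
scripts/rpc.py framework_start_init
# bring-up: TCP transport, a 64 MB malloc bdev with 512-byte blocks,
# a subsystem, its namespace, and the listener on the target address
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420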
00:21:49.907 [2024-11-05 04:33:03.378384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.907 [2024-11-05 04:33:03.456289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.907 [2024-11-05 04:33:03.494452] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.907 [2024-11-05 04:33:03.494485] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.907 [2024-11-05 04:33:03.494493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.907 [2024-11-05 04:33:03.494499] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.907 [2024-11-05 04:33:03.494505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.907 [2024-11-05 04:33:03.496019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.907 [2024-11-05 04:33:03.496134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.907 [2024-11-05 04:33:03.496291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.907 [2024-11-05 04:33:03.496292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:50.847 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:50.847 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:21:50.847 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.848 
04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.848 [2024-11-05 04:33:04.347066] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.848 Malloc1 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:50.848 [2024-11-05 04:33:04.429291] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3045741 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:50.848 04:33:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:53.396 04:33:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:53.396 04:33:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:53.396 04:33:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:53.396 04:33:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:53.396 04:33:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:53.396 "tick_rate": 2400000000, 00:21:53.396 "poll_groups": [ 00:21:53.396 { 00:21:53.396 "name": "nvmf_tgt_poll_group_000", 00:21:53.396 "admin_qpairs": 1, 00:21:53.396 "io_qpairs": 1, 00:21:53.396 "current_admin_qpairs": 1, 00:21:53.396 "current_io_qpairs": 1, 00:21:53.396 "pending_bdev_io": 0, 00:21:53.396 "completed_nvme_io": 19164, 00:21:53.396 "transports": [ 00:21:53.396 { 00:21:53.396 "trtype": "TCP" 00:21:53.396 } 00:21:53.396 ] 00:21:53.396 }, 00:21:53.396 { 00:21:53.396 "name": "nvmf_tgt_poll_group_001", 00:21:53.396 "admin_qpairs": 0, 00:21:53.396 "io_qpairs": 1, 00:21:53.396 "current_admin_qpairs": 0, 00:21:53.396 "current_io_qpairs": 1, 00:21:53.396 "pending_bdev_io": 0, 00:21:53.396 "completed_nvme_io": 27186, 00:21:53.396 "transports": [ 00:21:53.396 { 00:21:53.396 "trtype": "TCP" 00:21:53.396 } 00:21:53.396 ] 00:21:53.396 }, 00:21:53.396 { 00:21:53.396 "name": "nvmf_tgt_poll_group_002", 00:21:53.396 "admin_qpairs": 0, 00:21:53.396 "io_qpairs": 1, 00:21:53.396 "current_admin_qpairs": 0, 00:21:53.396 "current_io_qpairs": 1, 00:21:53.396 "pending_bdev_io": 0, 00:21:53.396 "completed_nvme_io": 20204, 00:21:53.396 "transports": [ 00:21:53.396 { 00:21:53.396 "trtype": "TCP" 00:21:53.396 } 00:21:53.396 ] 00:21:53.396 }, 00:21:53.396 { 00:21:53.396 "name": "nvmf_tgt_poll_group_003", 00:21:53.396 "admin_qpairs": 0, 00:21:53.396 "io_qpairs": 1, 00:21:53.396 "current_admin_qpairs": 0, 00:21:53.396 "current_io_qpairs": 1, 00:21:53.396 "pending_bdev_io": 0, 00:21:53.396 "completed_nvme_io": 19593, 00:21:53.396 "transports": [ 00:21:53.396 { 00:21:53.396 "trtype": "TCP" 00:21:53.396 } 00:21:53.396 ] 00:21:53.396 } 00:21:53.396 ] 00:21:53.396 }' 00:21:53.396 04:33:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:53.396 04:33:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:53.396 04:33:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:53.396 04:33:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:53.396 04:33:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3045741 00:22:01.535 Initializing NVMe Controllers 00:22:01.536 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:01.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:01.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:01.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:01.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with 
lcore 7
00:22:01.536 Initialization complete. Launching workers.
00:22:01.536 ========================================================
00:22:01.536 Latency(us)
00:22:01.536 Device Information : IOPS MiB/s Average min max
00:22:01.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11042.90 43.14 5795.76 1959.69 9408.38
00:22:01.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 14318.10 55.93 4469.55 1295.27 9651.75
00:22:01.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13791.80 53.87 4640.01 1263.17 10932.11
00:22:01.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13372.00 52.23 4785.86 1230.97 11567.42
00:22:01.536 ========================================================
00:22:01.536 Total : 52524.80 205.18 4873.66 1230.97 11567.42
00:22:01.536
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:22:01.536 rmmod nvme_tcp
00:22:01.536 rmmod nvme_fabrics
00:22:01.536 rmmod nvme_keyring
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3045461 ']'
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3045461
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3045461 ']'
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3045461
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3045461
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3045461'
00:22:01.536 killing process with pid 3045461
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3045461
00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3045461
00:22:01.536 04:33:14
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:01.536 04:33:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.448 04:33:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:03.448 04:33:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:22:03.448 04:33:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:03.448 04:33:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:05.360 04:33:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:22:07.272 04:33:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- 
# [[ e810 == e810 ]] 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:12.564 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:12.564 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:12.564 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:12.565 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:12.565 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:12.565 04:33:25 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:12.565 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:12.565 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.620 ms
00:22:12.565
00:22:12.565 --- 10.0.0.2 ping statistics ---
00:22:12.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:12.565 rtt min/avg/max/mdev = 0.620/0.620/0.620/0.000 ms
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:12.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:12.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms
00:22:12.565
00:22:12.565 --- 10.0.0.1 ping statistics ---
00:22:12.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:12.565 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:22:12.565 04:33:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:22:12.565 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:22:12.565 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:22:12.565 net.core.busy_poll = 1
00:22:12.565 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:22:12.565 net.core.busy_read = 1
00:22:12.565 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:22:12.565 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:22:12.565 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:22:12.565 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:22:12.565 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
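adq_configure_driver is the ADQ-specific piece of this second pass, and it is what distinguishes it from the baseline run earlier: hardware TC offload is switched on for the target port, busy polling is enabled so receive processing happens on the application threads, an mqprio qdisc in channel mode splits the NIC queues into two traffic classes, and a flower filter steers NVMe/TCP traffic for 10.0.0.2:4420 into the second class entirely in hardware (skip_sw). Condensed from the commands above (in the test, the ethtool and tc invocations run inside the target namespace):

# enable hardware TC offload on the E810 port; ADQ setup also clears
# the channel-pkt-inspect-optimize private flag
ethtool --offload cvl_0_0 hw-tc-offload on
ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
# let sockets busy-poll their receive queues
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# two traffic classes: TC0 -> queues 0-1, TC1 -> queues 2-3, offloaded (hw 1)
tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
tc qdisc add dev cvl_0_0 ingress
# steer NVMe/TCP (dst 10.0.0.2:4420) into TC1, in hardware only
tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1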
00:22:12.826 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc
00:22:12.826 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:12.826 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:12.826 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:12.826 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=3050744
00:22:12.826 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 3050744
00:22:12.826 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:22:12.826 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 3050744 ']'
00:22:12.826 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:12.826 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100
00:22:12.826 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:12.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:12.826 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable
00:22:12.826 04:33:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:12.826 [2024-11-05 04:33:26.297632] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization...
00:22:12.826 [2024-11-05 04:33:26.297684] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:12.826 [2024-11-05 04:33:26.378044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:12.826 [2024-11-05 04:33:26.413704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:12.826 [2024-11-05 04:33:26.413740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.826 [2024-11-05 04:33:26.413752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.826 [2024-11-05 04:33:26.413759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.826 [2024-11-05 04:33:26.413765] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:12.826 [2024-11-05 04:33:26.415259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.826 [2024-11-05 04:33:26.415371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.826 [2024-11-05 04:33:26.415529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.826 [2024-11-05 04:33:26.415530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.770 04:33:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.770 [2024-11-05 04:33:27.253568] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.770 Malloc1 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:13.770 [2024-11-05 04:33:27.319096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3051093 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:22:13.770 04:33:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:16.311 04:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:22:16.311 04:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:16.311 04:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:16.311 04:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:16.311 04:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:22:16.311 "tick_rate": 2400000000, 00:22:16.311 "poll_groups": [ 00:22:16.311 { 00:22:16.311 "name": "nvmf_tgt_poll_group_000", 00:22:16.311 "admin_qpairs": 1, 00:22:16.311 "io_qpairs": 0, 00:22:16.311 "current_admin_qpairs": 1, 00:22:16.311 "current_io_qpairs": 0, 00:22:16.311 "pending_bdev_io": 0, 00:22:16.311 "completed_nvme_io": 0, 00:22:16.311 "transports": [ 00:22:16.311 { 00:22:16.311 "trtype": "TCP" 00:22:16.311 } 00:22:16.311 ] 00:22:16.311 }, 00:22:16.311 { 00:22:16.311 "name": "nvmf_tgt_poll_group_001", 00:22:16.311 "admin_qpairs": 0, 00:22:16.311 "io_qpairs": 4, 00:22:16.311 "current_admin_qpairs": 0, 00:22:16.311 "current_io_qpairs": 4, 00:22:16.311 "pending_bdev_io": 0, 00:22:16.311 "completed_nvme_io": 49855, 00:22:16.311 "transports": [ 00:22:16.311 { 00:22:16.311 "trtype": "TCP" 00:22:16.311 } 00:22:16.311 ] 00:22:16.311 }, 00:22:16.311 { 00:22:16.311 "name": "nvmf_tgt_poll_group_002", 00:22:16.311 "admin_qpairs": 0, 00:22:16.311 "io_qpairs": 0, 00:22:16.311 "current_admin_qpairs": 0, 00:22:16.311 "current_io_qpairs": 0, 00:22:16.311 "pending_bdev_io": 0, 00:22:16.311 "completed_nvme_io": 0, 00:22:16.311 "transports": [ 00:22:16.311 { 00:22:16.311 "trtype": "TCP" 00:22:16.311 } 00:22:16.311 ] 00:22:16.311 }, 00:22:16.311 { 00:22:16.311 "name": "nvmf_tgt_poll_group_003", 00:22:16.311 "admin_qpairs": 0, 00:22:16.311 "io_qpairs": 0, 00:22:16.311 "current_admin_qpairs": 0, 00:22:16.311 "current_io_qpairs": 0, 00:22:16.311 "pending_bdev_io": 0, 00:22:16.311 "completed_nvme_io": 0, 00:22:16.311 "transports": [ 00:22:16.311 { 00:22:16.311 "trtype": "TCP" 00:22:16.311 } 00:22:16.311 ] 00:22:16.311 } 00:22:16.311 ] 00:22:16.311 }' 00:22:16.311 04:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:22:16.311 04:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:22:16.311 04:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:22:16.311 04:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:22:16.311 04:33:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3051093 00:22:24.443 Initializing NVMe Controllers 00:22:24.443 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:24.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:22:24.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:22:24.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:22:24.443 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:22:24.443 Initialization complete. Launching workers. 
00:22:24.443 ======================================================== 00:22:24.443 Latency(us) 00:22:24.443 Device Information : IOPS MiB/s Average min max 00:22:24.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6395.69 24.98 10007.12 1148.95 54156.76 00:22:24.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6531.49 25.51 9826.77 1166.53 56552.61 00:22:24.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 8027.19 31.36 7978.93 839.29 54960.56 00:22:24.443 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5712.99 22.32 11201.51 1388.57 56681.07 00:22:24.443 ======================================================== 00:22:24.443 Total : 26667.37 104.17 9608.31 839.29 56681.07 00:22:24.443 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:24.443 rmmod nvme_tcp 00:22:24.443 rmmod nvme_fabrics 00:22:24.443 rmmod nvme_keyring 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 3050744 ']' 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 3050744 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 3050744 ']' 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 3050744 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3050744 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3050744' 00:22:24.443 killing process with pid 3050744 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 3050744 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 3050744 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:24.443 
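The idle poll-group check above is how perf_adq.sh decides whether ADQ actually steered traffic: with --sock-priority set on the transport, the four I/O qpairs driven from cores 0xF0 should all land on a single nvmf poll group, so three of the four groups must still report current_io_qpairs of 0. The jq filter emits one line per idle group, wc -l counts them, and a count below 2 fails the run. A standalone sketch of the same assertion, assuming scripts/rpc.py can reach the running target and the nvmf_get_stats shape shown in this log:

#!/usr/bin/env bash
# Sketch of the perf_adq.sh@107-109 idle-poll-group assertion.
stats=$(./scripts/rpc.py nvmf_get_stats)
# select() passes only poll groups with no live I/O qpairs; 'length' then
# emits one value per such group, so wc -l yields the idle-group count.
idle=$(jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' <<< "$stats" | wc -l)
if [[ $idle -lt 2 ]]; then
    echo "ADQ steering ineffective: only $idle idle poll groups" >&2
    exit 1
fi
echo "$idle poll groups idle; I/O concentrated on one group as expected"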
04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.443 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.444 04:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.356 04:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:26.356 04:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:22:26.356 00:22:26.356 real 0m52.611s 00:22:26.356 user 2m49.385s 00:22:26.356 sys 0m11.584s 00:22:26.356 04:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:26.356 04:33:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:26.356 ************************************ 00:22:26.356 END TEST nvmf_perf_adq 00:22:26.356 ************************************ 00:22:26.356 04:33:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:26.356 04:33:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:26.356 04:33:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:26.356 04:33:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:26.356 ************************************ 00:22:26.356 START TEST nvmf_shutdown 00:22:26.356 ************************************ 00:22:26.356 04:33:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:26.618 * Looking for test storage... 
00:22:26.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:26.618 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:26.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.619 --rc genhtml_branch_coverage=1 00:22:26.619 --rc genhtml_function_coverage=1 00:22:26.619 --rc genhtml_legend=1 00:22:26.619 --rc geninfo_all_blocks=1 00:22:26.619 --rc geninfo_unexecuted_blocks=1 00:22:26.619 00:22:26.619 ' 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:26.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.619 --rc genhtml_branch_coverage=1 00:22:26.619 --rc genhtml_function_coverage=1 00:22:26.619 --rc genhtml_legend=1 00:22:26.619 --rc geninfo_all_blocks=1 00:22:26.619 --rc geninfo_unexecuted_blocks=1 00:22:26.619 00:22:26.619 ' 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:26.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.619 --rc genhtml_branch_coverage=1 00:22:26.619 --rc genhtml_function_coverage=1 00:22:26.619 --rc genhtml_legend=1 00:22:26.619 --rc geninfo_all_blocks=1 00:22:26.619 --rc geninfo_unexecuted_blocks=1 00:22:26.619 00:22:26.619 ' 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:26.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:26.619 --rc genhtml_branch_coverage=1 00:22:26.619 --rc genhtml_function_coverage=1 00:22:26.619 --rc genhtml_legend=1 00:22:26.619 --rc geninfo_all_blocks=1 00:22:26.619 --rc geninfo_unexecuted_blocks=1 00:22:26.619 00:22:26.619 ' 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
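The lt 1.15 2 trace above is scripts/common.sh's field-wise version compare: both strings are split on the IFS set .-:, missing fields default to zero, and the first unequal field decides, so lcov 1.15 sorts below 2 and the pre-2.x LCOV_OPTS branch is taken. A trimmed sketch of the same compare, reduced to the '<' case and assuming purely numeric fields, which the decimal() guard traced above enforces in the real script:

version_lt() {
    # Split both versions on the same separators cmp_versions uses.
    local -a v1 v2
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Absent fields compare as 0, so "2" behaves like "2.0.0".
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1 # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x" # prints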
00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:26.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:26.619 04:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:26.619 ************************************ 00:22:26.619 START TEST nvmf_shutdown_tc1 00:22:26.619 ************************************ 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:26.619 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:26.620 04:33:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:34.886 04:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:34.886 04:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:34.886 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:34.886 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.886 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:34.887 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:34.887 04:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:34.887 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:34.887 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.887 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms 00:22:34.887 00:22:34.887 --- 10.0.0.2 ping statistics --- 00:22:34.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.887 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:34.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.289 ms 00:22:34.887 00:22:34.887 --- 10.0.0.1 ping statistics --- 00:22:34.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.887 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=3057246 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 3057246 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3057246 ']' 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
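The nvmf_tcp_init sequence above builds the point-to-point fixture the rest of the run depends on: the first e810 port (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace and becomes the target at 10.0.0.2, its sibling cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, an iptables rule admits port 4420, and one ping in each direction gates the test. The same split can be reproduced without E810 hardware using a veth pair; the names below are illustrative, not taken from this log:

# Rebuild the target/initiator split with a veth pair in place of the
# physical cvl_0_0/cvl_0_1 ports.
sudo ip netns add tgt_ns
sudo ip link add veth_init type veth peer name veth_tgt
sudo ip link set veth_tgt netns tgt_ns              # target side enters the namespace
sudo ip addr add 10.0.0.1/24 dev veth_init          # initiator IP, root namespace
sudo ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
sudo ip link set veth_init up
sudo ip netns exec tgt_ns ip link set veth_tgt up
sudo ip netns exec tgt_ns ip link set lo up
# Admit NVMe/TCP on the initiator interface, as ipts() does above.
sudo iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
# The same two smoke pings the harness runs before starting nvmf_tgt.
ping -c 1 10.0.0.2
sudo ip netns exec tgt_ns ping -c 1 10.0.0.1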
00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:34.887 04:33:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:34.887 [2024-11-05 04:33:47.730395] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:22:34.887 [2024-11-05 04:33:47.730467] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.887 [2024-11-05 04:33:47.831757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:34.887 [2024-11-05 04:33:47.887261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.887 [2024-11-05 04:33:47.887323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.887 [2024-11-05 04:33:47.887331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.887 [2024-11-05 04:33:47.887339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.887 [2024-11-05 04:33:47.887346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.887 [2024-11-05 04:33:47.889366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.887 [2024-11-05 04:33:47.889537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:34.887 [2024-11-05 04:33:47.889703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:34.887 [2024-11-05 04:33:47.889703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.148 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:35.148 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:35.148 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:35.148 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.148 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.149 [2024-11-05 04:33:48.585098] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:35.149 04:33:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.149 04:33:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.149 Malloc1 
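The ten cat hits at shutdown.sh@29 above are filling rpcs.txt with one block of RPC lines per subsystem, and the bare rpc_cmd at shutdown.sh@36 then replays the whole file in a single shot, which is why Malloc1 through Malloc10 and the listener notice arrive as one batch. A sketch of the same batching pattern; the per-subsystem lines are a plausible reconstruction from the RPCs visible elsewhere in this log, not copied from shutdown.sh, and it assumes rpc.py's one-command-per-line stdin mode:

# Queue one block of target-setup RPCs per subsystem, then replay them
# through a single rpc.py invocation instead of forty separate ones.
rm -f rpcs.txt
for i in {1..10}; do
cat >> rpcs.txt <<EOF
bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
done
./scripts/rpc.py < rpcs.txt    # rpc.py reads one command per line from stdin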
00:22:35.149 [2024-11-05 04:33:48.701507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.149 Malloc2 00:22:35.149 Malloc3 00:22:35.409 Malloc4 00:22:35.409 Malloc5 00:22:35.409 Malloc6 00:22:35.409 Malloc7 00:22:35.409 Malloc8 00:22:35.409 Malloc9 00:22:35.409 Malloc10 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3057613 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3057613 /var/tmp/bdevperf.sock 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 3057613 ']' 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:35.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.670 { 00:22:35.670 "params": { 00:22:35.670 "name": "Nvme$subsystem", 00:22:35.670 "trtype": "$TEST_TRANSPORT", 00:22:35.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.670 "adrfam": "ipv4", 00:22:35.670 "trsvcid": "$NVMF_PORT", 00:22:35.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.670 "hdgst": ${hdgst:-false}, 00:22:35.670 "ddgst": ${ddgst:-false} 00:22:35.670 }, 00:22:35.670 "method": "bdev_nvme_attach_controller" 00:22:35.670 } 00:22:35.670 EOF 00:22:35.670 )") 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.670 { 00:22:35.670 "params": { 00:22:35.670 "name": "Nvme$subsystem", 00:22:35.670 "trtype": "$TEST_TRANSPORT", 00:22:35.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.670 "adrfam": "ipv4", 00:22:35.670 "trsvcid": "$NVMF_PORT", 00:22:35.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.670 "hdgst": ${hdgst:-false}, 00:22:35.670 "ddgst": ${ddgst:-false} 00:22:35.670 }, 00:22:35.670 "method": "bdev_nvme_attach_controller" 00:22:35.670 } 00:22:35.670 EOF 00:22:35.670 )") 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.670 { 00:22:35.670 "params": { 00:22:35.670 "name": "Nvme$subsystem", 00:22:35.670 "trtype": "$TEST_TRANSPORT", 00:22:35.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.670 "adrfam": "ipv4", 00:22:35.670 "trsvcid": "$NVMF_PORT", 00:22:35.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.670 "hdgst": ${hdgst:-false}, 00:22:35.670 "ddgst": ${ddgst:-false} 00:22:35.670 }, 00:22:35.670 "method": "bdev_nvme_attach_controller" 
00:22:35.670 } 00:22:35.670 EOF 00:22:35.670 )") 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.670 { 00:22:35.670 "params": { 00:22:35.670 "name": "Nvme$subsystem", 00:22:35.670 "trtype": "$TEST_TRANSPORT", 00:22:35.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.670 "adrfam": "ipv4", 00:22:35.670 "trsvcid": "$NVMF_PORT", 00:22:35.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.670 "hdgst": ${hdgst:-false}, 00:22:35.670 "ddgst": ${ddgst:-false} 00:22:35.670 }, 00:22:35.670 "method": "bdev_nvme_attach_controller" 00:22:35.670 } 00:22:35.670 EOF 00:22:35.670 )") 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.670 { 00:22:35.670 "params": { 00:22:35.670 "name": "Nvme$subsystem", 00:22:35.670 "trtype": "$TEST_TRANSPORT", 00:22:35.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.670 "adrfam": "ipv4", 00:22:35.670 "trsvcid": "$NVMF_PORT", 00:22:35.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.670 "hdgst": ${hdgst:-false}, 00:22:35.670 "ddgst": ${ddgst:-false} 00:22:35.670 }, 00:22:35.670 "method": "bdev_nvme_attach_controller" 00:22:35.670 } 00:22:35.670 EOF 00:22:35.670 )") 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.670 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.670 { 00:22:35.670 "params": { 00:22:35.670 "name": "Nvme$subsystem", 00:22:35.670 "trtype": "$TEST_TRANSPORT", 00:22:35.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.670 "adrfam": "ipv4", 00:22:35.670 "trsvcid": "$NVMF_PORT", 00:22:35.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.670 "hdgst": ${hdgst:-false}, 00:22:35.670 "ddgst": ${ddgst:-false} 00:22:35.670 }, 00:22:35.670 "method": "bdev_nvme_attach_controller" 00:22:35.670 } 00:22:35.670 EOF 00:22:35.671 )") 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.671 [2024-11-05 04:33:49.150494] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:22:35.671 [2024-11-05 04:33:49.150547] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.671 { 00:22:35.671 "params": { 00:22:35.671 "name": "Nvme$subsystem", 00:22:35.671 "trtype": "$TEST_TRANSPORT", 00:22:35.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.671 "adrfam": "ipv4", 00:22:35.671 "trsvcid": "$NVMF_PORT", 00:22:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.671 "hdgst": ${hdgst:-false}, 00:22:35.671 "ddgst": ${ddgst:-false} 00:22:35.671 }, 00:22:35.671 "method": "bdev_nvme_attach_controller" 00:22:35.671 } 00:22:35.671 EOF 00:22:35.671 )") 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.671 { 00:22:35.671 "params": { 00:22:35.671 "name": "Nvme$subsystem", 00:22:35.671 "trtype": "$TEST_TRANSPORT", 00:22:35.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.671 "adrfam": "ipv4", 00:22:35.671 "trsvcid": "$NVMF_PORT", 00:22:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.671 "hdgst": ${hdgst:-false}, 00:22:35.671 "ddgst": ${ddgst:-false} 00:22:35.671 }, 00:22:35.671 "method": "bdev_nvme_attach_controller" 00:22:35.671 } 00:22:35.671 EOF 00:22:35.671 )") 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.671 { 00:22:35.671 "params": { 00:22:35.671 "name": "Nvme$subsystem", 00:22:35.671 "trtype": "$TEST_TRANSPORT", 00:22:35.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.671 "adrfam": "ipv4", 00:22:35.671 "trsvcid": "$NVMF_PORT", 00:22:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.671 "hdgst": ${hdgst:-false}, 00:22:35.671 "ddgst": ${ddgst:-false} 00:22:35.671 }, 00:22:35.671 "method": "bdev_nvme_attach_controller" 00:22:35.671 } 00:22:35.671 EOF 00:22:35.671 )") 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:35.671 { 00:22:35.671 "params": { 00:22:35.671 "name": "Nvme$subsystem", 00:22:35.671 "trtype": "$TEST_TRANSPORT", 00:22:35.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:35.671 "adrfam": "ipv4", 
00:22:35.671 "trsvcid": "$NVMF_PORT", 00:22:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:35.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:35.671 "hdgst": ${hdgst:-false}, 00:22:35.671 "ddgst": ${ddgst:-false} 00:22:35.671 }, 00:22:35.671 "method": "bdev_nvme_attach_controller" 00:22:35.671 } 00:22:35.671 EOF 00:22:35.671 )") 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:35.671 04:33:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:35.671 "params": { 00:22:35.671 "name": "Nvme1", 00:22:35.671 "trtype": "tcp", 00:22:35.671 "traddr": "10.0.0.2", 00:22:35.671 "adrfam": "ipv4", 00:22:35.671 "trsvcid": "4420", 00:22:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:35.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:35.671 "hdgst": false, 00:22:35.671 "ddgst": false 00:22:35.671 }, 00:22:35.671 "method": "bdev_nvme_attach_controller" 00:22:35.671 },{ 00:22:35.671 "params": { 00:22:35.671 "name": "Nvme2", 00:22:35.671 "trtype": "tcp", 00:22:35.671 "traddr": "10.0.0.2", 00:22:35.671 "adrfam": "ipv4", 00:22:35.671 "trsvcid": "4420", 00:22:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:35.671 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:35.671 "hdgst": false, 00:22:35.671 "ddgst": false 00:22:35.671 }, 00:22:35.671 "method": "bdev_nvme_attach_controller" 00:22:35.671 },{ 00:22:35.671 "params": { 00:22:35.671 "name": "Nvme3", 00:22:35.671 "trtype": "tcp", 00:22:35.671 "traddr": "10.0.0.2", 00:22:35.671 "adrfam": "ipv4", 00:22:35.671 "trsvcid": "4420", 00:22:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:35.671 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:35.671 "hdgst": false, 00:22:35.671 "ddgst": false 00:22:35.671 }, 00:22:35.671 "method": "bdev_nvme_attach_controller" 00:22:35.671 },{ 00:22:35.671 "params": { 00:22:35.671 "name": "Nvme4", 00:22:35.671 "trtype": "tcp", 00:22:35.671 "traddr": "10.0.0.2", 00:22:35.671 "adrfam": "ipv4", 00:22:35.671 "trsvcid": "4420", 00:22:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:35.671 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:35.671 "hdgst": false, 00:22:35.671 "ddgst": false 00:22:35.671 }, 00:22:35.671 "method": "bdev_nvme_attach_controller" 00:22:35.671 },{ 00:22:35.671 "params": { 00:22:35.671 "name": "Nvme5", 00:22:35.671 "trtype": "tcp", 00:22:35.671 "traddr": "10.0.0.2", 00:22:35.671 "adrfam": "ipv4", 00:22:35.671 "trsvcid": "4420", 00:22:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:35.671 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:35.671 "hdgst": false, 00:22:35.671 "ddgst": false 00:22:35.671 }, 00:22:35.671 "method": "bdev_nvme_attach_controller" 00:22:35.671 },{ 00:22:35.671 "params": { 00:22:35.671 "name": "Nvme6", 00:22:35.671 "trtype": "tcp", 00:22:35.671 "traddr": "10.0.0.2", 00:22:35.671 "adrfam": "ipv4", 00:22:35.671 "trsvcid": "4420", 00:22:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:35.671 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:35.671 "hdgst": false, 00:22:35.671 "ddgst": false 00:22:35.671 }, 00:22:35.671 "method": "bdev_nvme_attach_controller" 00:22:35.671 },{ 00:22:35.671 "params": { 00:22:35.671 "name": "Nvme7", 00:22:35.671 "trtype": "tcp", 00:22:35.671 "traddr": "10.0.0.2", 00:22:35.671 
"adrfam": "ipv4", 00:22:35.671 "trsvcid": "4420", 00:22:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:35.671 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:35.671 "hdgst": false, 00:22:35.671 "ddgst": false 00:22:35.671 }, 00:22:35.671 "method": "bdev_nvme_attach_controller" 00:22:35.671 },{ 00:22:35.671 "params": { 00:22:35.671 "name": "Nvme8", 00:22:35.671 "trtype": "tcp", 00:22:35.671 "traddr": "10.0.0.2", 00:22:35.671 "adrfam": "ipv4", 00:22:35.671 "trsvcid": "4420", 00:22:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:35.671 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:35.671 "hdgst": false, 00:22:35.671 "ddgst": false 00:22:35.671 }, 00:22:35.671 "method": "bdev_nvme_attach_controller" 00:22:35.671 },{ 00:22:35.671 "params": { 00:22:35.671 "name": "Nvme9", 00:22:35.671 "trtype": "tcp", 00:22:35.671 "traddr": "10.0.0.2", 00:22:35.671 "adrfam": "ipv4", 00:22:35.671 "trsvcid": "4420", 00:22:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:35.671 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:35.671 "hdgst": false, 00:22:35.671 "ddgst": false 00:22:35.671 }, 00:22:35.671 "method": "bdev_nvme_attach_controller" 00:22:35.671 },{ 00:22:35.671 "params": { 00:22:35.671 "name": "Nvme10", 00:22:35.671 "trtype": "tcp", 00:22:35.671 "traddr": "10.0.0.2", 00:22:35.671 "adrfam": "ipv4", 00:22:35.671 "trsvcid": "4420", 00:22:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:35.671 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:35.671 "hdgst": false, 00:22:35.671 "ddgst": false 00:22:35.671 }, 00:22:35.671 "method": "bdev_nvme_attach_controller" 00:22:35.671 }' 00:22:35.671 [2024-11-05 04:33:49.224154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.671 [2024-11-05 04:33:49.260698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:37.584 04:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:37.584 04:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:22:37.584 04:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:37.584 04:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.584 04:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:37.584 04:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.584 04:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3057613 00:22:37.584 04:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:22:37.584 04:33:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:22:38.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3057613 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:22:38.526 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3057246 00:22:38.526 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:22:38.526 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:38.526 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:22:38.526 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:22:38.526 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.526 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.526 { 00:22:38.526 "params": { 00:22:38.526 "name": "Nvme$subsystem", 00:22:38.526 "trtype": "$TEST_TRANSPORT", 00:22:38.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.526 "adrfam": "ipv4", 00:22:38.526 "trsvcid": "$NVMF_PORT", 00:22:38.526 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.526 "hdgst": ${hdgst:-false}, 00:22:38.526 "ddgst": ${ddgst:-false} 00:22:38.526 }, 00:22:38.526 "method": "bdev_nvme_attach_controller" 00:22:38.526 } 00:22:38.526 EOF 00:22:38.526 )") 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.527 { 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme$subsystem", 00:22:38.527 "trtype": "$TEST_TRANSPORT", 00:22:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "$NVMF_PORT", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.527 "hdgst": ${hdgst:-false}, 00:22:38.527 "ddgst": ${ddgst:-false} 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 } 00:22:38.527 EOF 00:22:38.527 )") 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.527 { 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme$subsystem", 00:22:38.527 "trtype": "$TEST_TRANSPORT", 00:22:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "$NVMF_PORT", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.527 "hdgst": ${hdgst:-false}, 00:22:38.527 "ddgst": ${ddgst:-false} 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 } 00:22:38.527 EOF 00:22:38.527 )") 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.527 { 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme$subsystem", 00:22:38.527 "trtype": "$TEST_TRANSPORT", 00:22:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "$NVMF_PORT", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.527 "hdgst": ${hdgst:-false}, 00:22:38.527 "ddgst": ${ddgst:-false} 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 } 00:22:38.527 EOF 00:22:38.527 )") 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.527 { 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme$subsystem", 00:22:38.527 "trtype": "$TEST_TRANSPORT", 00:22:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "$NVMF_PORT", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.527 "hdgst": ${hdgst:-false}, 00:22:38.527 "ddgst": ${ddgst:-false} 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 } 00:22:38.527 EOF 00:22:38.527 )") 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.527 { 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme$subsystem", 00:22:38.527 "trtype": "$TEST_TRANSPORT", 00:22:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "$NVMF_PORT", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.527 "hdgst": ${hdgst:-false}, 00:22:38.527 "ddgst": ${ddgst:-false} 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 } 00:22:38.527 EOF 00:22:38.527 )") 00:22:38.527 04:33:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.527 04:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.527 04:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.527 { 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme$subsystem", 00:22:38.527 "trtype": "$TEST_TRANSPORT", 00:22:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "$NVMF_PORT", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.527 "hdgst": ${hdgst:-false}, 00:22:38.527 "ddgst": ${ddgst:-false} 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 } 00:22:38.527 EOF 00:22:38.527 )") 00:22:38.527 04:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.527 04:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.527 04:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.527 { 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme$subsystem", 00:22:38.527 "trtype": "$TEST_TRANSPORT", 00:22:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "$NVMF_PORT", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.527 "hdgst": ${hdgst:-false}, 00:22:38.527 "ddgst": ${ddgst:-false} 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 } 00:22:38.527 EOF 00:22:38.527 )") 00:22:38.527 [2024-11-05 04:33:52.013838] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:22:38.527 [2024-11-05 04:33:52.013894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3058306 ] 00:22:38.527 04:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.527 04:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.527 04:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.527 { 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme$subsystem", 00:22:38.527 "trtype": "$TEST_TRANSPORT", 00:22:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "$NVMF_PORT", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.527 "hdgst": ${hdgst:-false}, 00:22:38.527 "ddgst": ${ddgst:-false} 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 } 00:22:38.527 EOF 00:22:38.527 )") 00:22:38.527 04:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.527 04:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:38.527 04:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:38.527 { 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme$subsystem", 00:22:38.527 "trtype": "$TEST_TRANSPORT", 00:22:38.527 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "$NVMF_PORT", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:38.527 "hdgst": ${hdgst:-false}, 00:22:38.527 "ddgst": ${ddgst:-false} 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 } 00:22:38.527 EOF 00:22:38.527 )") 00:22:38.527 04:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:22:38.527 04:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
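The --json /dev/fd/62 argument in the bdevperf command line above is the tell-tale of process substitution: shutdown.sh never writes the generated config to disk, it pipes gen_nvmf_target_json straight into the app. A sketch of the invocation under test, with queue depth, I/O size, workload and runtime taken from the trace ($rootdir is the jenkins workspace path shown above):

    $rootdir/build/examples/bdevperf \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 1

All ten attach-controller stanzas resolve to the same 10.0.0.2:4420 listener, so a single bdevperf run exercises every cnode subsystem at once.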
00:22:38.527 04:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:22:38.527 04:33:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme1", 00:22:38.527 "trtype": "tcp", 00:22:38.527 "traddr": "10.0.0.2", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "4420", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:38.527 "hdgst": false, 00:22:38.527 "ddgst": false 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 },{ 00:22:38.527 "params": { 00:22:38.527 "name": "Nvme2", 00:22:38.527 "trtype": "tcp", 00:22:38.527 "traddr": "10.0.0.2", 00:22:38.527 "adrfam": "ipv4", 00:22:38.527 "trsvcid": "4420", 00:22:38.527 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:38.527 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:38.527 "hdgst": false, 00:22:38.527 "ddgst": false 00:22:38.527 }, 00:22:38.527 "method": "bdev_nvme_attach_controller" 00:22:38.527 },{ 00:22:38.528 "params": { 00:22:38.528 "name": "Nvme3", 00:22:38.528 "trtype": "tcp", 00:22:38.528 "traddr": "10.0.0.2", 00:22:38.528 "adrfam": "ipv4", 00:22:38.528 "trsvcid": "4420", 00:22:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:38.528 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:38.528 "hdgst": false, 00:22:38.528 "ddgst": false 00:22:38.528 }, 00:22:38.528 "method": "bdev_nvme_attach_controller" 00:22:38.528 },{ 00:22:38.528 "params": { 00:22:38.528 "name": "Nvme4", 00:22:38.528 "trtype": "tcp", 00:22:38.528 "traddr": "10.0.0.2", 00:22:38.528 "adrfam": "ipv4", 00:22:38.528 "trsvcid": "4420", 00:22:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:38.528 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:38.528 "hdgst": false, 00:22:38.528 "ddgst": false 00:22:38.528 }, 00:22:38.528 "method": "bdev_nvme_attach_controller" 00:22:38.528 },{ 00:22:38.528 "params": { 00:22:38.528 "name": "Nvme5", 00:22:38.528 "trtype": "tcp", 00:22:38.528 "traddr": "10.0.0.2", 00:22:38.528 "adrfam": "ipv4", 00:22:38.528 "trsvcid": "4420", 00:22:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:38.528 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:38.528 "hdgst": false, 00:22:38.528 "ddgst": false 00:22:38.528 }, 00:22:38.528 "method": "bdev_nvme_attach_controller" 00:22:38.528 },{ 00:22:38.528 "params": { 00:22:38.528 "name": "Nvme6", 00:22:38.528 "trtype": "tcp", 00:22:38.528 "traddr": "10.0.0.2", 00:22:38.528 "adrfam": "ipv4", 00:22:38.528 "trsvcid": "4420", 00:22:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:38.528 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:38.528 "hdgst": false, 00:22:38.528 "ddgst": false 00:22:38.528 }, 00:22:38.528 "method": "bdev_nvme_attach_controller" 00:22:38.528 },{ 00:22:38.528 "params": { 00:22:38.528 "name": "Nvme7", 00:22:38.528 "trtype": "tcp", 00:22:38.528 "traddr": "10.0.0.2", 00:22:38.528 "adrfam": "ipv4", 00:22:38.528 "trsvcid": "4420", 00:22:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:38.528 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:38.528 "hdgst": false, 00:22:38.528 "ddgst": false 00:22:38.528 }, 00:22:38.528 "method": "bdev_nvme_attach_controller" 00:22:38.528 },{ 00:22:38.528 "params": { 00:22:38.528 "name": "Nvme8", 00:22:38.528 "trtype": "tcp", 00:22:38.528 "traddr": "10.0.0.2", 00:22:38.528 "adrfam": "ipv4", 00:22:38.528 "trsvcid": "4420", 00:22:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:38.528 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:22:38.528 "hdgst": false, 00:22:38.528 "ddgst": false 00:22:38.528 }, 00:22:38.528 "method": "bdev_nvme_attach_controller" 00:22:38.528 },{ 00:22:38.528 "params": { 00:22:38.528 "name": "Nvme9", 00:22:38.528 "trtype": "tcp", 00:22:38.528 "traddr": "10.0.0.2", 00:22:38.528 "adrfam": "ipv4", 00:22:38.528 "trsvcid": "4420", 00:22:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:38.528 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:38.528 "hdgst": false, 00:22:38.528 "ddgst": false 00:22:38.528 }, 00:22:38.528 "method": "bdev_nvme_attach_controller" 00:22:38.528 },{ 00:22:38.528 "params": { 00:22:38.528 "name": "Nvme10", 00:22:38.528 "trtype": "tcp", 00:22:38.528 "traddr": "10.0.0.2", 00:22:38.528 "adrfam": "ipv4", 00:22:38.528 "trsvcid": "4420", 00:22:38.528 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:38.528 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:38.528 "hdgst": false, 00:22:38.528 "ddgst": false 00:22:38.528 }, 00:22:38.528 "method": "bdev_nvme_attach_controller" 00:22:38.528 }' 00:22:38.528 [2024-11-05 04:33:52.086085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.528 [2024-11-05 04:33:52.122172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.911 Running I/O for 1 seconds... 00:22:41.293 1933.00 IOPS, 120.81 MiB/s 00:22:41.293 Latency(us) 00:22:41.293 [2024-11-05T03:33:54.933Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.293 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.293 Verification LBA range: start 0x0 length 0x400 00:22:41.293 Nvme1n1 : 1.13 226.61 14.16 0.00 0.00 279226.88 22173.01 248162.99 00:22:41.293 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.293 Verification LBA range: start 0x0 length 0x400 00:22:41.293 Nvme2n1 : 1.13 226.38 14.15 0.00 0.00 274641.71 19660.80 249910.61 00:22:41.293 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.293 Verification LBA range: start 0x0 length 0x400 00:22:41.293 Nvme3n1 : 1.05 246.52 15.41 0.00 0.00 246647.26 3686.40 253405.87 00:22:41.293 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.293 Verification LBA range: start 0x0 length 0x400 00:22:41.293 Nvme4n1 : 1.12 227.90 14.24 0.00 0.00 263369.81 14964.05 255153.49 00:22:41.293 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.293 Verification LBA range: start 0x0 length 0x400 00:22:41.293 Nvme5n1 : 1.11 230.39 14.40 0.00 0.00 255679.57 16711.68 248162.99 00:22:41.293 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.293 Verification LBA range: start 0x0 length 0x400 00:22:41.293 Nvme6n1 : 1.12 233.78 14.61 0.00 0.00 246581.58 2921.81 239424.85 00:22:41.293 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.293 Verification LBA range: start 0x0 length 0x400 00:22:41.293 Nvme7n1 : 1.18 270.85 16.93 0.00 0.00 210784.09 15728.64 248162.99 00:22:41.293 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.293 Verification LBA range: start 0x0 length 0x400 00:22:41.293 Nvme8n1 : 1.19 273.01 17.06 0.00 0.00 205028.65 2075.31 242920.11 00:22:41.293 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:41.293 Verification LBA range: start 0x0 length 0x400 00:22:41.293 Nvme9n1 : 1.20 266.37 16.65 0.00 0.00 206957.82 11304.96 244667.73 00:22:41.293 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:22:41.293 Verification LBA range: start 0x0 length 0x400 00:22:41.293 Nvme10n1 : 1.19 267.95 16.75 0.00 0.00 201790.12 6198.61 267386.88 00:22:41.293 [2024-11-05T03:33:54.933Z] =================================================================================================================== 00:22:41.293 [2024-11-05T03:33:54.933Z] Total : 2469.76 154.36 0.00 0.00 236066.67 2075.31 267386.88 00:22:41.293 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:22:41.293 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:41.293 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:41.293 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:41.293 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:41.293 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:41.293 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:22:41.293 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:41.293 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:22:41.293 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:41.293 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:41.293 rmmod nvme_tcp 00:22:41.293 rmmod nvme_fabrics 00:22:41.293 rmmod nvme_keyring 00:22:41.554 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:41.554 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:22:41.554 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:22:41.554 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 3057246 ']' 00:22:41.554 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 3057246 00:22:41.554 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 3057246 ']' 00:22:41.554 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 3057246 00:22:41.554 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:22:41.554 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:41.554 04:33:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3057246 00:22:41.554 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:41.554 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:41.554 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3057246' 00:22:41.554 killing process with pid 3057246 00:22:41.554 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 3057246 00:22:41.554 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 3057246 00:22:41.815 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.815 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.815 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.815 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:22:41.815 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.815 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:22:41.815 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.815 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.815 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.815 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.815 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.815 04:33:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.728 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:43.728 00:22:43.728 real 0m17.085s 00:22:43.728 user 0m35.944s 00:22:43.728 sys 0m6.691s 00:22:43.728 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:43.728 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:43.728 ************************************ 00:22:43.728 END TEST nvmf_shutdown_tc1 00:22:43.728 ************************************ 00:22:43.728 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:22:43.728 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:43.728 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:43.728 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:43.988 ************************************ 00:22:43.988 START TEST nvmf_shutdown_tc2 00:22:43.988 ************************************ 00:22:43.988 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:22:43.988 04:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:22:43.988 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:43.988 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:43.988 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:43.988 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:43.988 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:43.988 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 
-- # mlx=() 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:43.989 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 
-- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:43.989 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:43.989 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:43.989 04:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:43.989 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:43.989 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:43.990 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:43.990 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:43.990 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:43.990 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:43.990 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:43.990 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:43.990 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:43.990 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:43.990 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:43.990 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:43.990 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:44.250 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:44.250 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:44.250 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:44.250 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:44.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:44.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.576 ms 00:22:44.250 00:22:44.250 --- 10.0.0.2 ping statistics --- 00:22:44.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.250 rtt min/avg/max/mdev = 0.576/0.576/0.576/0.000 ms 00:22:44.250 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:44.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:44.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:22:44.250 00:22:44.250 --- 10.0.0.1 ping statistics --- 00:22:44.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:44.250 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:22:44.250 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:44.250 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:22:44.250 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:44.251 04:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3059426 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3059426 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3059426 ']' 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:44.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:44.251 04:33:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:44.251 [2024-11-05 04:33:57.836788] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:22:44.251 [2024-11-05 04:33:57.836853] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:44.512 [2024-11-05 04:33:57.931425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:44.512 [2024-11-05 04:33:57.965691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:44.512 [2024-11-05 04:33:57.965723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:44.512 [2024-11-05 04:33:57.965729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:44.512 [2024-11-05 04:33:57.965734] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:44.512 [2024-11-05 04:33:57.965738] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
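Before nvmf_tgt comes up (nvmfpid 3059426 above), nvmftestinit has split the two e810 ports across network namespaces, which is why the target is launched under ip netns exec cvl_0_0_ns_spdk. A condensed sketch of that plumbing, using the interface names and addresses from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP toward the target
    ping -c 1 10.0.0.2                                     # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns

The sub-millisecond ping round trips (0.576 ms and 0.276 ms above) confirm the back-to-back link is up before the target starts listening.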
00:22:44.512 [2024-11-05 04:33:57.967276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:44.512 [2024-11-05 04:33:57.967310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:44.512 [2024-11-05 04:33:57.967455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.512 [2024-11-05 04:33:57.967457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:45.083 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:45.083 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:45.083 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:45.083 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.084 [2024-11-05 04:33:58.677931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.084 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:45.344 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.344 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:45.344 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.344 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:45.344 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.344 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:45.344 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:45.344 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:22:45.344 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:45.344 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.344 04:33:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.344 Malloc1 00:22:45.344 [2024-11-05 04:33:58.793530] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:45.344 Malloc2 00:22:45.344 Malloc3 00:22:45.344 Malloc4 00:22:45.344 Malloc5 00:22:45.344 Malloc6 00:22:45.605 Malloc7 00:22:45.605 Malloc8 00:22:45.605 Malloc9 00:22:45.605 Malloc10 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3059806 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3059806 /var/tmp/bdevperf.sock 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3059806 ']' 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:45.605 04:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:45.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.605 { 00:22:45.605 "params": { 00:22:45.605 "name": "Nvme$subsystem", 00:22:45.605 "trtype": "$TEST_TRANSPORT", 00:22:45.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.605 "adrfam": "ipv4", 00:22:45.605 "trsvcid": "$NVMF_PORT", 00:22:45.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.605 "hdgst": ${hdgst:-false}, 00:22:45.605 "ddgst": ${ddgst:-false} 00:22:45.605 }, 00:22:45.605 "method": "bdev_nvme_attach_controller" 00:22:45.605 } 00:22:45.605 EOF 00:22:45.605 )") 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.605 { 00:22:45.605 "params": { 00:22:45.605 "name": "Nvme$subsystem", 00:22:45.605 "trtype": "$TEST_TRANSPORT", 00:22:45.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.605 "adrfam": "ipv4", 00:22:45.605 "trsvcid": "$NVMF_PORT", 00:22:45.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.605 "hdgst": ${hdgst:-false}, 00:22:45.605 "ddgst": ${ddgst:-false} 00:22:45.605 }, 00:22:45.605 "method": "bdev_nvme_attach_controller" 00:22:45.605 } 00:22:45.605 EOF 00:22:45.605 )") 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.605 { 00:22:45.605 "params": { 00:22:45.605 
"name": "Nvme$subsystem", 00:22:45.605 "trtype": "$TEST_TRANSPORT", 00:22:45.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.605 "adrfam": "ipv4", 00:22:45.605 "trsvcid": "$NVMF_PORT", 00:22:45.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.605 "hdgst": ${hdgst:-false}, 00:22:45.605 "ddgst": ${ddgst:-false} 00:22:45.605 }, 00:22:45.605 "method": "bdev_nvme_attach_controller" 00:22:45.605 } 00:22:45.605 EOF 00:22:45.605 )") 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.605 { 00:22:45.605 "params": { 00:22:45.605 "name": "Nvme$subsystem", 00:22:45.605 "trtype": "$TEST_TRANSPORT", 00:22:45.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.605 "adrfam": "ipv4", 00:22:45.605 "trsvcid": "$NVMF_PORT", 00:22:45.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.605 "hdgst": ${hdgst:-false}, 00:22:45.605 "ddgst": ${ddgst:-false} 00:22:45.605 }, 00:22:45.605 "method": "bdev_nvme_attach_controller" 00:22:45.605 } 00:22:45.605 EOF 00:22:45.605 )") 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.605 { 00:22:45.605 "params": { 00:22:45.605 "name": "Nvme$subsystem", 00:22:45.605 "trtype": "$TEST_TRANSPORT", 00:22:45.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.605 "adrfam": "ipv4", 00:22:45.605 "trsvcid": "$NVMF_PORT", 00:22:45.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.605 "hdgst": ${hdgst:-false}, 00:22:45.605 "ddgst": ${ddgst:-false} 00:22:45.605 }, 00:22:45.605 "method": "bdev_nvme_attach_controller" 00:22:45.605 } 00:22:45.605 EOF 00:22:45.605 )") 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.605 { 00:22:45.605 "params": { 00:22:45.605 "name": "Nvme$subsystem", 00:22:45.605 "trtype": "$TEST_TRANSPORT", 00:22:45.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.605 "adrfam": "ipv4", 00:22:45.605 "trsvcid": "$NVMF_PORT", 00:22:45.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.605 "hdgst": ${hdgst:-false}, 00:22:45.605 "ddgst": ${ddgst:-false} 00:22:45.605 }, 00:22:45.605 "method": "bdev_nvme_attach_controller" 00:22:45.605 } 00:22:45.605 EOF 00:22:45.605 )") 00:22:45.605 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.866 [2024-11-05 04:33:59.247287] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:22:45.866 [2024-11-05 04:33:59.247342] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3059806 ] 00:22:45.866 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.866 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.866 { 00:22:45.866 "params": { 00:22:45.866 "name": "Nvme$subsystem", 00:22:45.866 "trtype": "$TEST_TRANSPORT", 00:22:45.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.866 "adrfam": "ipv4", 00:22:45.866 "trsvcid": "$NVMF_PORT", 00:22:45.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.866 "hdgst": ${hdgst:-false}, 00:22:45.866 "ddgst": ${ddgst:-false} 00:22:45.866 }, 00:22:45.866 "method": "bdev_nvme_attach_controller" 00:22:45.866 } 00:22:45.866 EOF 00:22:45.866 )") 00:22:45.866 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.866 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.866 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.866 { 00:22:45.866 "params": { 00:22:45.866 "name": "Nvme$subsystem", 00:22:45.866 "trtype": "$TEST_TRANSPORT", 00:22:45.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.866 "adrfam": "ipv4", 00:22:45.866 "trsvcid": "$NVMF_PORT", 00:22:45.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.866 "hdgst": ${hdgst:-false}, 00:22:45.866 "ddgst": ${ddgst:-false} 00:22:45.866 }, 00:22:45.866 "method": "bdev_nvme_attach_controller" 00:22:45.866 } 00:22:45.866 EOF 00:22:45.866 )") 00:22:45.866 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.866 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.866 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.866 { 00:22:45.866 "params": { 00:22:45.866 "name": "Nvme$subsystem", 00:22:45.866 "trtype": "$TEST_TRANSPORT", 00:22:45.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.866 "adrfam": "ipv4", 00:22:45.866 "trsvcid": "$NVMF_PORT", 00:22:45.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.866 "hdgst": ${hdgst:-false}, 00:22:45.866 "ddgst": ${ddgst:-false} 00:22:45.866 }, 00:22:45.866 "method": "bdev_nvme_attach_controller" 00:22:45.866 } 00:22:45.866 EOF 00:22:45.866 )") 00:22:45.866 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.866 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:45.866 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:45.866 { 00:22:45.866 "params": { 00:22:45.866 "name": "Nvme$subsystem", 00:22:45.866 "trtype": "$TEST_TRANSPORT", 00:22:45.866 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:45.866 
"adrfam": "ipv4", 00:22:45.866 "trsvcid": "$NVMF_PORT", 00:22:45.866 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:45.866 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:45.866 "hdgst": ${hdgst:-false}, 00:22:45.866 "ddgst": ${ddgst:-false} 00:22:45.866 }, 00:22:45.866 "method": "bdev_nvme_attach_controller" 00:22:45.866 } 00:22:45.866 EOF 00:22:45.866 )") 00:22:45.866 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:22:45.866 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 00:22:45.866 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:22:45.867 04:33:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:45.867 "params": { 00:22:45.867 "name": "Nvme1", 00:22:45.867 "trtype": "tcp", 00:22:45.867 "traddr": "10.0.0.2", 00:22:45.867 "adrfam": "ipv4", 00:22:45.867 "trsvcid": "4420", 00:22:45.867 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:45.867 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:45.867 "hdgst": false, 00:22:45.867 "ddgst": false 00:22:45.867 }, 00:22:45.867 "method": "bdev_nvme_attach_controller" 00:22:45.867 },{ 00:22:45.867 "params": { 00:22:45.867 "name": "Nvme2", 00:22:45.867 "trtype": "tcp", 00:22:45.867 "traddr": "10.0.0.2", 00:22:45.867 "adrfam": "ipv4", 00:22:45.867 "trsvcid": "4420", 00:22:45.867 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:45.867 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:45.867 "hdgst": false, 00:22:45.867 "ddgst": false 00:22:45.867 }, 00:22:45.867 "method": "bdev_nvme_attach_controller" 00:22:45.867 },{ 00:22:45.867 "params": { 00:22:45.867 "name": "Nvme3", 00:22:45.867 "trtype": "tcp", 00:22:45.867 "traddr": "10.0.0.2", 00:22:45.867 "adrfam": "ipv4", 00:22:45.867 "trsvcid": "4420", 00:22:45.867 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:45.867 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:45.867 "hdgst": false, 00:22:45.867 "ddgst": false 00:22:45.867 }, 00:22:45.867 "method": "bdev_nvme_attach_controller" 00:22:45.867 },{ 00:22:45.867 "params": { 00:22:45.867 "name": "Nvme4", 00:22:45.867 "trtype": "tcp", 00:22:45.867 "traddr": "10.0.0.2", 00:22:45.867 "adrfam": "ipv4", 00:22:45.867 "trsvcid": "4420", 00:22:45.867 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:45.867 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:45.867 "hdgst": false, 00:22:45.867 "ddgst": false 00:22:45.867 }, 00:22:45.867 "method": "bdev_nvme_attach_controller" 00:22:45.867 },{ 00:22:45.867 "params": { 00:22:45.867 "name": "Nvme5", 00:22:45.867 "trtype": "tcp", 00:22:45.867 "traddr": "10.0.0.2", 00:22:45.867 "adrfam": "ipv4", 00:22:45.867 "trsvcid": "4420", 00:22:45.867 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:45.867 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:45.867 "hdgst": false, 00:22:45.867 "ddgst": false 00:22:45.867 }, 00:22:45.867 "method": "bdev_nvme_attach_controller" 00:22:45.867 },{ 00:22:45.867 "params": { 00:22:45.867 "name": "Nvme6", 00:22:45.867 "trtype": "tcp", 00:22:45.867 "traddr": "10.0.0.2", 00:22:45.867 "adrfam": "ipv4", 00:22:45.867 "trsvcid": "4420", 00:22:45.867 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:45.867 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:45.867 "hdgst": false, 00:22:45.867 "ddgst": false 00:22:45.867 }, 00:22:45.867 "method": "bdev_nvme_attach_controller" 00:22:45.867 },{ 00:22:45.867 "params": { 00:22:45.867 "name": "Nvme7", 00:22:45.867 "trtype": "tcp", 00:22:45.867 "traddr": "10.0.0.2", 
00:22:45.867 "adrfam": "ipv4", 00:22:45.867 "trsvcid": "4420", 00:22:45.867 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:45.867 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:45.867 "hdgst": false, 00:22:45.867 "ddgst": false 00:22:45.867 }, 00:22:45.867 "method": "bdev_nvme_attach_controller" 00:22:45.867 },{ 00:22:45.867 "params": { 00:22:45.867 "name": "Nvme8", 00:22:45.867 "trtype": "tcp", 00:22:45.867 "traddr": "10.0.0.2", 00:22:45.867 "adrfam": "ipv4", 00:22:45.867 "trsvcid": "4420", 00:22:45.867 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:45.867 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:45.867 "hdgst": false, 00:22:45.867 "ddgst": false 00:22:45.867 }, 00:22:45.867 "method": "bdev_nvme_attach_controller" 00:22:45.867 },{ 00:22:45.867 "params": { 00:22:45.867 "name": "Nvme9", 00:22:45.867 "trtype": "tcp", 00:22:45.867 "traddr": "10.0.0.2", 00:22:45.867 "adrfam": "ipv4", 00:22:45.867 "trsvcid": "4420", 00:22:45.867 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:45.867 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:45.867 "hdgst": false, 00:22:45.867 "ddgst": false 00:22:45.867 }, 00:22:45.867 "method": "bdev_nvme_attach_controller" 00:22:45.867 },{ 00:22:45.867 "params": { 00:22:45.867 "name": "Nvme10", 00:22:45.867 "trtype": "tcp", 00:22:45.867 "traddr": "10.0.0.2", 00:22:45.867 "adrfam": "ipv4", 00:22:45.867 "trsvcid": "4420", 00:22:45.867 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:45.867 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:45.867 "hdgst": false, 00:22:45.867 "ddgst": false 00:22:45.867 }, 00:22:45.867 "method": "bdev_nvme_attach_controller" 00:22:45.867 }' 00:22:45.867 [2024-11-05 04:33:59.318896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.867 [2024-11-05 04:33:59.355410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.250 Running I/O for 10 seconds... 
00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.250 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.251 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:47.251 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:47.251 04:34:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:47.510 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:47.510 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:47.510 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:47.510 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:47.510 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.510 04:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.510 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.510 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=71 00:22:47.510 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 71 -ge 100 ']' 00:22:47.510 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=141 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 141 -ge 100 ']' 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3059806 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3059806 ']' 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3059806 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:47.770 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3059806 00:22:48.030 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:48.030 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:48.030 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3059806' 00:22:48.030 killing process with pid 3059806 00:22:48.030 04:34:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3059806
00:22:48.030 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3059806
00:22:48.030 Received shutdown signal, test time was about 0.977942 seconds
00:22:48.030
00:22:48.030 Latency(us)
00:22:48.030 [2024-11-05T03:34:01.670Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:48.030 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.030 Verification LBA range: start 0x0 length 0x400
00:22:48.031 Nvme1n1 : 0.97 268.77 16.80 0.00 0.00 235052.64 7645.87 242920.11
00:22:48.031 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.031 Verification LBA range: start 0x0 length 0x400
00:22:48.031 Nvme2n1 : 0.96 199.54 12.47 0.00 0.00 310555.59 17913.17 281367.89
00:22:48.031 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.031 Verification LBA range: start 0x0 length 0x400
00:22:48.031 Nvme3n1 : 0.96 269.69 16.86 0.00 0.00 223372.73 6990.51 246415.36
00:22:48.031 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.031 Verification LBA range: start 0x0 length 0x400
00:22:48.031 Nvme4n1 : 0.96 266.73 16.67 0.00 0.00 222359.04 17148.59 246415.36
00:22:48.031 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.031 Verification LBA range: start 0x0 length 0x400
00:22:48.031 Nvme5n1 : 0.94 204.47 12.78 0.00 0.00 283467.09 19005.44 248162.99
00:22:48.031 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.031 Verification LBA range: start 0x0 length 0x400
00:22:48.031 Nvme6n1 : 0.95 201.72 12.61 0.00 0.00 281423.93 21299.20 248162.99
00:22:48.031 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.031 Verification LBA range: start 0x0 length 0x400
00:22:48.031 Nvme7n1 : 0.97 263.03 16.44 0.00 0.00 211174.40 16493.23 249910.61
00:22:48.031 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.031 Verification LBA range: start 0x0 length 0x400
00:22:48.031 Nvme8n1 : 0.97 262.78 16.42 0.00 0.00 206557.65 23702.19 249910.61
00:22:48.031 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.031 Verification LBA range: start 0x0 length 0x400
00:22:48.031 Nvme9n1 : 0.95 202.48 12.65 0.00 0.00 260660.05 19442.35 248162.99
00:22:48.031 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:48.031 Verification LBA range: start 0x0 length 0x400
00:22:48.031 Nvme10n1 : 0.98 262.01 16.38 0.00 0.00 197742.72 14636.37 228939.09
00:22:48.031 [2024-11-05T03:34:01.671Z] ===================================================================================================================
00:22:48.031 [2024-11-05T03:34:01.671Z] Total : 2401.21 150.08 0.00 0.00 238671.51 6990.51 281367.89
00:22:48.031 04:34:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1
00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3059426
00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget
00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:22:49.413 04:34:02
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:49.413 rmmod nvme_tcp 00:22:49.413 rmmod nvme_fabrics 00:22:49.413 rmmod nvme_keyring 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 3059426 ']' 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 3059426 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 3059426 ']' 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 3059426 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3059426 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3059426' 00:22:49.413 killing process with pid 3059426 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 3059426 00:22:49.413 04:34:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 3059426 00:22:49.413 04:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:49.413 04:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:49.413 04:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:49.413 04:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:49.413 04:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:49.413 04:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:49.413 04:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:49.674 04:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:49.674 04:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:49.674 04:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.674 04:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.674 04:34:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:51.585 00:22:51.585 real 0m7.715s 00:22:51.585 user 0m22.930s 00:22:51.585 sys 0m1.300s 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:51.585 ************************************ 00:22:51.585 END TEST nvmf_shutdown_tc2 00:22:51.585 ************************************ 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:51.585 ************************************ 00:22:51.585 START TEST nvmf_shutdown_tc3 00:22:51.585 ************************************ 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@438 -- # local -g is_hw=no 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:51.585 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:51.846 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.846 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.846 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.846 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.846 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:51.847 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:51.847 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.847 04:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:51.847 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:51.847 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.847 04:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:51.847 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # 
ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:22:52.109 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:22:52.109 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.600 ms
00:22:52.109
00:22:52.109 --- 10.0.0.2 ping statistics ---
00:22:52.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:52.109 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:22:52.109 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:22:52.109 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms
00:22:52.109
00:22:52.109 --- 10.0.0.1 ping statistics ---
00:22:52.109 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:22:52.109 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=3061134
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 3061134
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3061134 ']'
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:52.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable
00:22:52.109 04:34:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:22:52.109 [2024-11-05 04:34:05.652706] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization...
00:22:52.109 [2024-11-05 04:34:05.652781] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:22:52.370 [2024-11-05 04:34:05.750053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:22:52.370 [2024-11-05 04:34:05.790011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:22:52.370 [2024-11-05 04:34:05.790056] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:22:52.370 [2024-11-05 04:34:05.790063] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:22:52.370 [2024-11-05 04:34:05.790068] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:22:52.370 [2024-11-05 04:34:05.790073] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
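Editor's note: nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace, and waitforlisten then blocks until the target's RPC socket answers. A hedged sketch of that start-and-wait handshake follows; the socket path, namespace and app arguments come from this log, while the spdk_get_version probe, the relative rpc.py path and the 0.5 s interval are assumptions:

# Sketch only: start the target in the namespace, then poll its RPC socket.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
max_retries=100
until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    kill -0 "$nvmfpid" || exit 1          # target died before listening
    (( --max_retries > 0 )) || exit 1     # retry budget exhausted
    sleep 0.5
done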
00:22:52.370 [2024-11-05 04:34:05.791792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.370 [2024-11-05 04:34:05.791961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:52.370 [2024-11-05 04:34:05.792079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.370 [2024-11-05 04:34:05.792081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.940 [2024-11-05 04:34:06.501359] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in 
"${num_subsystems[@]}" 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.940 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.201 Malloc1 00:22:53.201 [2024-11-05 04:34:06.611711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.201 Malloc2 00:22:53.201 Malloc3 00:22:53.201 Malloc4 00:22:53.201 Malloc5 00:22:53.201 Malloc6 00:22:53.201 Malloc7 00:22:53.462 Malloc8 00:22:53.462 Malloc9 00:22:53.462 Malloc10 00:22:53.462 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:53.462 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:53.462 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:53.462 04:34:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3061360 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3061360 /var/tmp/bdevperf.sock 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 3061360 ']' 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:53.462 04:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:53.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.462 { 00:22:53.462 "params": { 00:22:53.462 "name": "Nvme$subsystem", 00:22:53.462 "trtype": "$TEST_TRANSPORT", 00:22:53.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.462 "adrfam": "ipv4", 00:22:53.462 "trsvcid": "$NVMF_PORT", 00:22:53.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.462 "hdgst": ${hdgst:-false}, 00:22:53.462 "ddgst": ${ddgst:-false} 00:22:53.462 }, 00:22:53.462 "method": "bdev_nvme_attach_controller" 00:22:53.462 } 00:22:53.462 EOF 00:22:53.462 )") 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.462 { 00:22:53.462 "params": { 00:22:53.462 "name": "Nvme$subsystem", 00:22:53.462 "trtype": "$TEST_TRANSPORT", 00:22:53.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.462 "adrfam": "ipv4", 00:22:53.462 "trsvcid": "$NVMF_PORT", 00:22:53.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.462 "hdgst": ${hdgst:-false}, 00:22:53.462 "ddgst": ${ddgst:-false} 00:22:53.462 }, 00:22:53.462 "method": "bdev_nvme_attach_controller" 00:22:53.462 } 00:22:53.462 EOF 00:22:53.462 )") 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.462 { 00:22:53.462 "params": { 00:22:53.462 
"name": "Nvme$subsystem", 00:22:53.462 "trtype": "$TEST_TRANSPORT", 00:22:53.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.462 "adrfam": "ipv4", 00:22:53.462 "trsvcid": "$NVMF_PORT", 00:22:53.462 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.462 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.462 "hdgst": ${hdgst:-false}, 00:22:53.462 "ddgst": ${ddgst:-false} 00:22:53.462 }, 00:22:53.462 "method": "bdev_nvme_attach_controller" 00:22:53.462 } 00:22:53.462 EOF 00:22:53.462 )") 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.462 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.462 { 00:22:53.462 "params": { 00:22:53.462 "name": "Nvme$subsystem", 00:22:53.462 "trtype": "$TEST_TRANSPORT", 00:22:53.462 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.462 "adrfam": "ipv4", 00:22:53.462 "trsvcid": "$NVMF_PORT", 00:22:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.463 "hdgst": ${hdgst:-false}, 00:22:53.463 "ddgst": ${ddgst:-false} 00:22:53.463 }, 00:22:53.463 "method": "bdev_nvme_attach_controller" 00:22:53.463 } 00:22:53.463 EOF 00:22:53.463 )") 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.463 { 00:22:53.463 "params": { 00:22:53.463 "name": "Nvme$subsystem", 00:22:53.463 "trtype": "$TEST_TRANSPORT", 00:22:53.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.463 "adrfam": "ipv4", 00:22:53.463 "trsvcid": "$NVMF_PORT", 00:22:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.463 "hdgst": ${hdgst:-false}, 00:22:53.463 "ddgst": ${ddgst:-false} 00:22:53.463 }, 00:22:53.463 "method": "bdev_nvme_attach_controller" 00:22:53.463 } 00:22:53.463 EOF 00:22:53.463 )") 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.463 { 00:22:53.463 "params": { 00:22:53.463 "name": "Nvme$subsystem", 00:22:53.463 "trtype": "$TEST_TRANSPORT", 00:22:53.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.463 "adrfam": "ipv4", 00:22:53.463 "trsvcid": "$NVMF_PORT", 00:22:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.463 "hdgst": ${hdgst:-false}, 00:22:53.463 "ddgst": ${ddgst:-false} 00:22:53.463 }, 00:22:53.463 "method": "bdev_nvme_attach_controller" 00:22:53.463 } 00:22:53.463 EOF 00:22:53.463 )") 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.463 [2024-11-05 04:34:07.055577] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:22:53.463 [2024-11-05 04:34:07.055631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3061360 ] 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.463 { 00:22:53.463 "params": { 00:22:53.463 "name": "Nvme$subsystem", 00:22:53.463 "trtype": "$TEST_TRANSPORT", 00:22:53.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.463 "adrfam": "ipv4", 00:22:53.463 "trsvcid": "$NVMF_PORT", 00:22:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.463 "hdgst": ${hdgst:-false}, 00:22:53.463 "ddgst": ${ddgst:-false} 00:22:53.463 }, 00:22:53.463 "method": "bdev_nvme_attach_controller" 00:22:53.463 } 00:22:53.463 EOF 00:22:53.463 )") 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.463 { 00:22:53.463 "params": { 00:22:53.463 "name": "Nvme$subsystem", 00:22:53.463 "trtype": "$TEST_TRANSPORT", 00:22:53.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.463 "adrfam": "ipv4", 00:22:53.463 "trsvcid": "$NVMF_PORT", 00:22:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.463 "hdgst": ${hdgst:-false}, 00:22:53.463 "ddgst": ${ddgst:-false} 00:22:53.463 }, 00:22:53.463 "method": "bdev_nvme_attach_controller" 00:22:53.463 } 00:22:53.463 EOF 00:22:53.463 )") 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.463 { 00:22:53.463 "params": { 00:22:53.463 "name": "Nvme$subsystem", 00:22:53.463 "trtype": "$TEST_TRANSPORT", 00:22:53.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.463 "adrfam": "ipv4", 00:22:53.463 "trsvcid": "$NVMF_PORT", 00:22:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.463 "hdgst": ${hdgst:-false}, 00:22:53.463 "ddgst": ${ddgst:-false} 00:22:53.463 }, 00:22:53.463 "method": "bdev_nvme_attach_controller" 00:22:53.463 } 00:22:53.463 EOF 00:22:53.463 )") 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:53.463 { 00:22:53.463 "params": { 00:22:53.463 "name": "Nvme$subsystem", 00:22:53.463 "trtype": "$TEST_TRANSPORT", 00:22:53.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.463 
"adrfam": "ipv4", 00:22:53.463 "trsvcid": "$NVMF_PORT", 00:22:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.463 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.463 "hdgst": ${hdgst:-false}, 00:22:53.463 "ddgst": ${ddgst:-false} 00:22:53.463 }, 00:22:53.463 "method": "bdev_nvme_attach_controller" 00:22:53.463 } 00:22:53.463 EOF 00:22:53.463 )") 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:53.463 04:34:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:53.463 "params": { 00:22:53.463 "name": "Nvme1", 00:22:53.463 "trtype": "tcp", 00:22:53.463 "traddr": "10.0.0.2", 00:22:53.463 "adrfam": "ipv4", 00:22:53.463 "trsvcid": "4420", 00:22:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.463 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.463 "hdgst": false, 00:22:53.463 "ddgst": false 00:22:53.463 }, 00:22:53.463 "method": "bdev_nvme_attach_controller" 00:22:53.463 },{ 00:22:53.463 "params": { 00:22:53.463 "name": "Nvme2", 00:22:53.463 "trtype": "tcp", 00:22:53.463 "traddr": "10.0.0.2", 00:22:53.463 "adrfam": "ipv4", 00:22:53.463 "trsvcid": "4420", 00:22:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:53.463 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:53.463 "hdgst": false, 00:22:53.463 "ddgst": false 00:22:53.463 }, 00:22:53.463 "method": "bdev_nvme_attach_controller" 00:22:53.463 },{ 00:22:53.463 "params": { 00:22:53.463 "name": "Nvme3", 00:22:53.463 "trtype": "tcp", 00:22:53.463 "traddr": "10.0.0.2", 00:22:53.463 "adrfam": "ipv4", 00:22:53.463 "trsvcid": "4420", 00:22:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:53.463 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:53.463 "hdgst": false, 00:22:53.463 "ddgst": false 00:22:53.463 }, 00:22:53.463 "method": "bdev_nvme_attach_controller" 00:22:53.463 },{ 00:22:53.463 "params": { 00:22:53.463 "name": "Nvme4", 00:22:53.463 "trtype": "tcp", 00:22:53.463 "traddr": "10.0.0.2", 00:22:53.463 "adrfam": "ipv4", 00:22:53.463 "trsvcid": "4420", 00:22:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:53.463 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:53.463 "hdgst": false, 00:22:53.463 "ddgst": false 00:22:53.463 }, 00:22:53.463 "method": "bdev_nvme_attach_controller" 00:22:53.463 },{ 00:22:53.463 "params": { 00:22:53.463 "name": "Nvme5", 00:22:53.463 "trtype": "tcp", 00:22:53.463 "traddr": "10.0.0.2", 00:22:53.463 "adrfam": "ipv4", 00:22:53.463 "trsvcid": "4420", 00:22:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:53.463 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:53.463 "hdgst": false, 00:22:53.463 "ddgst": false 00:22:53.463 }, 00:22:53.463 "method": "bdev_nvme_attach_controller" 00:22:53.463 },{ 00:22:53.463 "params": { 00:22:53.463 "name": "Nvme6", 00:22:53.463 "trtype": "tcp", 00:22:53.463 "traddr": "10.0.0.2", 00:22:53.463 "adrfam": "ipv4", 00:22:53.463 "trsvcid": "4420", 00:22:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:53.463 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:53.463 "hdgst": false, 00:22:53.463 "ddgst": false 00:22:53.463 }, 00:22:53.463 "method": "bdev_nvme_attach_controller" 00:22:53.463 },{ 00:22:53.463 "params": { 00:22:53.463 "name": "Nvme7", 00:22:53.463 "trtype": "tcp", 00:22:53.463 "traddr": "10.0.0.2", 
00:22:53.463 "adrfam": "ipv4", 00:22:53.463 "trsvcid": "4420", 00:22:53.463 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:53.463 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:53.463 "hdgst": false, 00:22:53.463 "ddgst": false 00:22:53.463 }, 00:22:53.463 "method": "bdev_nvme_attach_controller" 00:22:53.463 },{ 00:22:53.464 "params": { 00:22:53.464 "name": "Nvme8", 00:22:53.464 "trtype": "tcp", 00:22:53.464 "traddr": "10.0.0.2", 00:22:53.464 "adrfam": "ipv4", 00:22:53.464 "trsvcid": "4420", 00:22:53.464 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:53.464 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:53.464 "hdgst": false, 00:22:53.464 "ddgst": false 00:22:53.464 }, 00:22:53.464 "method": "bdev_nvme_attach_controller" 00:22:53.464 },{ 00:22:53.464 "params": { 00:22:53.464 "name": "Nvme9", 00:22:53.464 "trtype": "tcp", 00:22:53.464 "traddr": "10.0.0.2", 00:22:53.464 "adrfam": "ipv4", 00:22:53.464 "trsvcid": "4420", 00:22:53.464 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:53.464 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:53.464 "hdgst": false, 00:22:53.464 "ddgst": false 00:22:53.464 }, 00:22:53.464 "method": "bdev_nvme_attach_controller" 00:22:53.464 },{ 00:22:53.464 "params": { 00:22:53.464 "name": "Nvme10", 00:22:53.464 "trtype": "tcp", 00:22:53.464 "traddr": "10.0.0.2", 00:22:53.464 "adrfam": "ipv4", 00:22:53.464 "trsvcid": "4420", 00:22:53.464 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:53.464 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:53.464 "hdgst": false, 00:22:53.464 "ddgst": false 00:22:53.464 }, 00:22:53.464 "method": "bdev_nvme_attach_controller" 00:22:53.464 }' 00:22:53.724 [2024-11-05 04:34:07.127067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.724 [2024-11-05 04:34:07.163518] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:55.106 Running I/O for 10 seconds... 
00:22:55.106 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:55.106 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:22:55.106 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:55.106 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.106 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:22:55.368 04:34:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:55.629 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:55.629 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:55.629 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:55.629 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
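
The waitforio loop now running polls bdevperf's iostat up to ten times, 0.25 s apart, until Nvme1n1 has completed at least 100 reads; the read counts 3, 67, and 131 in this trace are three successive iterations of that loop. A condensed sketch of the pattern, with SPDK's rpc.py standing in for the repo's rpc_cmd wrapper (the function name is illustrative):

# Sketch of the waitforio loop traced in target/shutdown.sh: poll
# bdev_get_iostat over the bdevperf RPC socket until reads reach 100.
waitforio_sketch() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0    # enough I/O observed; safe to start shutting down
            break
        fi
        sleep 0.25
    done
    return $ret
}
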
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:55.629 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.629 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.629 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.629 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:22:55.629 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:22:55.629 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3061134 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3061134 ']' 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3061134 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:55.890 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3061134 00:22:56.176 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:56.177 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:56.177 04:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3061134' killing process with pid 3061134 00:22:56.177 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 3061134 00:22:56.177 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 3061134
00:22:56.177 [2024-11-05 04:34:09.557617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2b640 is same with the state(6) to be set
[message repeated dozens of times for tqpair=0xf2b640]
00:22:56.177 [2024-11-05 04:34:09.558868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2e0a0 is same with the state(6) to be set
[message repeated dozens of times for tqpair=0xf2e0a0]
00:22:56.178 [2024-11-05 04:34:09.559948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2bb10 is same with the state(6) to be set
[message repeated dozens of times for tqpair=0xf2bb10]
00:22:56.178 [2024-11-05 04:34:09.561002] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2bfe0 is same with the state(6) to be set
[message repeated for tqpair=0xf2bfe0; excerpt truncated mid-message]
recv state of tqpair=0xf2bfe0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.561305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2bfe0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.561310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2bfe0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562073] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562083] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562162] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 
00:22:56.179 [2024-11-05 04:34:09.562269] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562350] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c4d0 is 
same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562885] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562897] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562953] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562972] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.179 [2024-11-05 04:34:09.562986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.562991] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.562995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563009] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563057] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563061] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563080] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2c9a0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 
00:22:56.180 [2024-11-05 04:34:09.563750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563810] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.563839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.568228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC 
EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb83c30 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.568411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70c990 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.568497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 
04:34:09.568536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713fc0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.568581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb41370 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.568667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713470 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.568758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.180 [2024-11-05 04:34:09.568813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.568820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x715cb0 is same with the state(6) to be set 00:22:56.180 [2024-11-05 04:34:09.570394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.180 [2024-11-05 04:34:09.570418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.570442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.180 [2024-11-05 04:34:09.570450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.570459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.180 [2024-11-05 04:34:09.570467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.570477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.180 [2024-11-05 04:34:09.570484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.570494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.180 [2024-11-05 04:34:09.570501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 
[2024-11-05 04:34:09.570511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.180 [2024-11-05 04:34:09.570518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.570528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.180 [2024-11-05 04:34:09.570537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.570546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.180 [2024-11-05 04:34:09.570554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.570563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.180 [2024-11-05 04:34:09.570570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.570580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.180 [2024-11-05 04:34:09.570588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.570597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.180 [2024-11-05 04:34:09.570605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.570614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.180 [2024-11-05 04:34:09.570621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.570631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.180 [2024-11-05 04:34:09.570638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.570648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.180 [2024-11-05 04:34:09.570657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.180 [2024-11-05 04:34:09.570667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.180 [2024-11-05 04:34:09.570674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570683] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570858] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.570985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.570994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.571001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.571011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.571018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.181 [2024-11-05 04:34:09.571027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.181 [2024-11-05 04:34:09.571034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical READ command/ABORTED - SQ DELETION (00/08) pairs repeat for cid:36-63 (lba:29184-32640) and then for cid:0-40 (lba:24576-29696), all sqid:1 nsid:1 len:128, 04:34:09.571043 through 04:34:09.572294]
[READ command/ABORTED - SQ DELETION (00/08) pairs continue for cid:41-47 (lba:29824-30592); the READ for cid:48 (lba:30720) is printed at 04:34:09.572411]
00:22:56.182 [2024-11-05 04:34:09.574419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2cd20 is same with the state(6) to be set
[the tqpair=0xf2cd20 message repeats dozens of times through 04:34:09.574645]
00:22:56.182 [2024-11-05 04:34:09.575358] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2d1f0 is same with the state(6) to be set
00:22:56.182 [2024-11-05 04:34:09.575799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2d6e0 is same with the state(6) to be set
00:22:56.182 [2024-11-05 04:34:09.576240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf2dbb0 is same with the state(6) to be set
[the tqpair=0xf2dbb0 message repeats dozens of times through 04:34:09.576545]
00:22:56.183 [2024-11-05 04:34:09.585137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[READ command/completion pairs resume for cid:49-51 (lba:30848-31104), each ABORTED - SQ DELETION (00/08), through 04:34:09.585232]
[READ command/ABORTED - SQ DELETION (00/08) pairs continue for cid:52-63 (lba:31232-32640), through 04:34:09.585437]
00:22:56.183 [2024-11-05 04:34:09.586514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[identical WRITE command/ABORTED - SQ DELETION (00/08) pairs follow for cid:0-63 (lba:24576-32640), all sqid:1 nsid:1 len:128, through 04:34:09.587623]
00:22:56.184 [2024-11-05 04:34:09.587647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:56.184 [2024-11-05 04:34:09.587830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb83c30 (9): Bad file descriptor
00:22:56.184 [2024-11-05 04:34:09.587875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:56.184 [2024-11-05 04:34:09.587885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the ASYNC EVENT REQUEST/abort pair repeats for qid:0 cid:1 and cid:2, through 04:34:09.587916]
00:22:56.184 [2024-11-05 04:34:09.587925] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.184 [2024-11-05 04:34:09.587932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.587939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62d610 is same with the state(6) to be set 00:22:56.184 [2024-11-05 04:34:09.587966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.184 [2024-11-05 04:34:09.587976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.587984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.184 [2024-11-05 04:34:09.587991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.587999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.184 [2024-11-05 04:34:09.588006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.588014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.184 [2024-11-05 04:34:09.588021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.588028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb527b0 is same with the state(6) to be set 00:22:56.184 [2024-11-05 04:34:09.588048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.184 [2024-11-05 04:34:09.588059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.588067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.184 [2024-11-05 04:34:09.588075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.588082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.184 [2024-11-05 04:34:09.588089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.588098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.184 [2024-11-05 04:34:09.588105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.588111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb85490 is same with the 
state(6) to be set 00:22:56.184 [2024-11-05 04:34:09.588138] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.184 [2024-11-05 04:34:09.588146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.588154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.184 [2024-11-05 04:34:09.588161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.588169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.184 [2024-11-05 04:34:09.588176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.588184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:56.184 [2024-11-05 04:34:09.588191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.588198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53110 is same with the state(6) to be set 00:22:56.184 [2024-11-05 04:34:09.588215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70c990 (9): Bad file descriptor 00:22:56.184 [2024-11-05 04:34:09.588229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x713fc0 (9): Bad file descriptor 00:22:56.184 [2024-11-05 04:34:09.588244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb41370 (9): Bad file descriptor 00:22:56.184 [2024-11-05 04:34:09.588261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x713470 (9): Bad file descriptor 00:22:56.184 [2024-11-05 04:34:09.588276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x715cb0 (9): Bad file descriptor 00:22:56.184 [2024-11-05 04:34:09.590970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.590995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 
lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:56.184 [2024-11-05 04:34:09.591735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.184 [2024-11-05 04:34:09.591744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.184 [2024-11-05 04:34:09.591757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.591766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.591774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.591783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.591790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.591799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.591807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.591816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.591823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.591833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.591840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.591849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.591857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.591866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.591873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.591884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.591891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.591901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 
04:34:09.591908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.591917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.591924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.591933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.591941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.591950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.591957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.591967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.591974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.591983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.591990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.591999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.592006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.592015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.592023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.592032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.592039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.592049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.592056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.185 [2024-11-05 04:34:09.592065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.185 [2024-11-05 04:34:09.592072] 
00:22:56.185 [2024-11-05 04:34:09.593381] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:56.185 [2024-11-05 04:34:09.593406] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:56.185 [2024-11-05 04:34:09.594918] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:56.185 [2024-11-05 04:34:09.594951] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:22:56.185 [2024-11-05 04:34:09.594970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb527b0 (9): Bad file descriptor
00:22:56.185 [2024-11-05 04:34:09.595248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:56.185 [2024-11-05 04:34:09.595264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x715cb0 with addr=10.0.0.2, port=4420
00:22:56.185 [2024-11-05 04:34:09.595274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x715cb0 is same with the state(6) to be set
00:22:56.185 [2024-11-05 04:34:09.595512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:56.185 [2024-11-05 04:34:09.595522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x70c990 with addr=10.0.0.2, port=4420
00:22:56.185 [2024-11-05 04:34:09.595529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70c990 is same with the state(6) to be set
00:22:56.185 [2024-11-05 04:34:09.596107] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:56.185 [2024-11-05 04:34:09.596146] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:56.185 [2024-11-05 04:34:09.596182] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:56.185 [2024-11-05 04:34:09.596692] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:56.185 [2024-11-05 04:34:09.596714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb53110 (9): Bad file descriptor
00:22:56.185 [2024-11-05 04:34:09.596733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x715cb0 (9): Bad file descriptor
00:22:56.185 [2024-11-05 04:34:09.596743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70c990 (9): Bad file descriptor
[... 2024-11-05 04:34:09.596786-09.597703: READ sqid:1 cid:5-59 nsid:1 lba:17024-23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each command completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (55 command/completion pairs) ...]
[... 2024-11-05 04:34:09.597712-09.597857: WRITE sqid:1 cid:0-4 nsid:1 lba:24576-25088 and READ sqid:1 cid:60-63 nsid:1 lba:24064-24448, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each command completed ABORTED - SQ DELETION (00/08) (9 command/completion pairs) ...]
00:22:56.186 [2024-11-05 04:34:09.597866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbab30 is same with the state(6) to be set
00:22:56.186 [2024-11-05 04:34:09.597970] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:56.186 [2024-11-05 04:34:09.598286] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:22:56.186 [2024-11-05 04:34:09.598660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:56.186 [2024-11-05 04:34:09.598673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb527b0 with addr=10.0.0.2, port=4420
00:22:56.186 [2024-11-05 04:34:09.598682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb527b0 is same with the state(6) to be set
00:22:56.186 [2024-11-05 04:34:09.598700] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:56.186 [2024-11-05 04:34:09.598708] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:56.186 [2024-11-05 04:34:09.598717] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:56.186 [2024-11-05 04:34:09.598730] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:22:56.186 [2024-11-05 04:34:09.598737] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:22:56.186 [2024-11-05 04:34:09.598760] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:22:56.186 [2024-11-05 04:34:09.598792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62d610 (9): Bad file descriptor
00:22:56.186 [2024-11-05 04:34:09.598812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb85490 (9): Bad file descriptor
00:22:56.186 [2024-11-05 04:34:09.598845] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:22:56.186 [2024-11-05 04:34:09.600132] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:56.186 [2024-11-05 04:34:09.600146] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:22:56.186 [2024-11-05 04:34:09.600168] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:56.186 [2024-11-05 04:34:09.600379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:56.186 [2024-11-05 04:34:09.600393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb53110 with addr=10.0.0.2, port=4420
00:22:56.186 [2024-11-05 04:34:09.600402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53110 is same with the state(6) to be set
00:22:56.186 [2024-11-05 04:34:09.600413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb527b0 (9): Bad file descriptor
00:22:56.186 [2024-11-05 04:34:09.600461-601543] nvme_qpair.c: *NOTICE*: [condensed: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs — READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0]
00:22:56.187 [2024-11-05 04:34:09.601552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbc040 is same with the state(6) to be set
00:22:56.187 [2024-11-05 04:34:09.602822-603938] nvme_qpair.c: *NOTICE*: [condensed: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs — READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0]
00:22:56.188 [2024-11-05 04:34:09.603946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbbd5c0 is same with the state(6) to be set
00:22:56.188 [2024-11-05 04:34:09.605244-606122] nvme_qpair.c: *NOTICE*: [condensed: repeated nvme_io_qpair_print_command / spdk_nvme_print_completion pairs — READ sqid:1 cid:0-49 nsid:1 lba:16384-22656 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0; output truncated mid-entry]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.188 [2024-11-05 04:34:09.606132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.188 [2024-11-05 04:34:09.606139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.188 [2024-11-05 04:34:09.606149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.188 [2024-11-05 04:34:09.606156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.188 [2024-11-05 04:34:09.606165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.188 [2024-11-05 04:34:09.606172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.188 [2024-11-05 04:34:09.606182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.188 [2024-11-05 04:34:09.606190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.188 [2024-11-05 04:34:09.606204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.188 [2024-11-05 04:34:09.606211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.188 [2024-11-05 04:34:09.606221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.188 [2024-11-05 04:34:09.606228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.188 [2024-11-05 04:34:09.606237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.188 [2024-11-05 04:34:09.606245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.188 [2024-11-05 04:34:09.606255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.188 [2024-11-05 04:34:09.606262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.188 [2024-11-05 04:34:09.606272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.188 [2024-11-05 04:34:09.606279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.188 [2024-11-05 04:34:09.606288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.188 [2024-11-05 04:34:09.606296] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.188 [2024-11-05 04:34:09.606305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.188 [2024-11-05 04:34:09.606312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.188 [2024-11-05 04:34:09.606321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.188 [2024-11-05 04:34:09.606328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.188 [2024-11-05 04:34:09.606337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.188 [2024-11-05 04:34:09.606345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.188 [2024-11-05 04:34:09.606355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.188 [2024-11-05 04:34:09.606362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.188 [2024-11-05 04:34:09.606371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1c9c0 is same with the state(6) to be set 00:22:56.189 [2024-11-05 04:34:09.607616] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:56.189 [2024-11-05 04:34:09.607632] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:56.189 [2024-11-05 04:34:09.607643] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:56.189 [2024-11-05 04:34:09.608106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.189 [2024-11-05 04:34:09.608147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x713470 with addr=10.0.0.2, port=4420 00:22:56.189 [2024-11-05 04:34:09.608160] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713470 is same with the state(6) to be set 00:22:56.189 [2024-11-05 04:34:09.608183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb53110 (9): Bad file descriptor 00:22:56.189 [2024-11-05 04:34:09.608194] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:56.189 [2024-11-05 04:34:09.608201] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:56.189 [2024-11-05 04:34:09.608211] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:56.189 [2024-11-05 04:34:09.608263] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:22:56.189 [2024-11-05 04:34:09.608594] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
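The wall of ABORTED - SQ DELETION notices above is the expected fallout of the shutdown test deleting a submission queue while bdevperf's verify workload still has I/O in flight: every outstanding command on qid:1 completes with status 00/08 and is printed individually. A minimal sketch for collapsing such runs when reading these logs offline; it assumes only the cid:/lba: message layout shown above and is a hypothetical helper, not part of SPDK or the autotest scripts:

    import re, sys

    # Match the command prints; in this log each one is followed by an
    # "ABORTED - SQ DELETION" completion, so counting commands counts aborts.
    pat = re.compile(r'nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) '
                     r'sqid:(\d+) cid:\d+ nsid:\d+ lba:(\d+) len:\d+')
    runs = {}  # (opcode, sqid) -> (count, min_lba, max_lba)
    for line in sys.stdin:
        for op, sqid, lba in pat.findall(line):
            count, lo, hi = runs.get((op, sqid), (0, int(lba), int(lba)))
            runs[(op, sqid)] = (count + 1, min(lo, int(lba)), max(hi, int(lba)))
    for (op, sqid), (count, lo, hi) in sorted(runs.items()):
        print(f'{op} sqid:{sqid}: {count} aborted commands, lba {lo}..{hi}')

Fed this section on stdin, it would print one summary line per (opcode, sqid) run instead of one pair of notices per command.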
00:22:56.189 [2024-11-05 04:34:09.608948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:56.189 [2024-11-05 04:34:09.608962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb41370 with addr=10.0.0.2, port=4420
00:22:56.189 [2024-11-05 04:34:09.608970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb41370 is same with the state(6) to be set
00:22:56.189 [2024-11-05 04:34:09.609321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:56.189 [2024-11-05 04:34:09.609331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x713fc0 with addr=10.0.0.2, port=4420
00:22:56.189 [2024-11-05 04:34:09.609338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713fc0 is same with the state(6) to be set
00:22:56.189 [2024-11-05 04:34:09.609656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:56.189 [2024-11-05 04:34:09.609666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb83c30 with addr=10.0.0.2, port=4420
00:22:56.189 [2024-11-05 04:34:09.609673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb83c30 is same with the state(6) to be set
00:22:56.189 [2024-11-05 04:34:09.609683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x713470 (9): Bad file descriptor
00:22:56.189 [2024-11-05 04:34:09.609692] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:22:56.189 [2024-11-05 04:34:09.609699] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:22:56.189 [2024-11-05 04:34:09.609707] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:22:56.189 [2024-11-05 04:34:09.610552] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:56.189 [2024-11-05 04:34:09.610570] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:56.189 [2024-11-05 04:34:09.610580] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:22:56.189 [2024-11-05 04:34:09.610610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb41370 (9): Bad file descriptor
00:22:56.189 [2024-11-05 04:34:09.610620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x713fc0 (9): Bad file descriptor
00:22:56.189 [2024-11-05 04:34:09.610631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb83c30 (9): Bad file descriptor
00:22:56.189 [2024-11-05 04:34:09.610639] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:22:56.189 [2024-11-05 04:34:09.610646] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:22:56.189 [2024-11-05 04:34:09.610653] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:22:56.189 [2024-11-05 04:34:09.610670] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
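In the connect() failures above, errno = 111 is Linux ECONNREFUSED: while the subsystems are being torn down, nothing is listening on 10.0.0.2:4420, so each reconnect attempt is refused outright rather than timing out. A quick standalone check (plain Python, not taken from the SPDK sources):

    import errno, os
    print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))  # -> 111 Connection refused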
00:22:56.189 [2024-11-05 04:34:09.610732 .. 04:34:09.611629] nvme_qpair.c: (52 repeated command/completion pairs elided) READ sqid:1 cid:17 through cid:63 (lba:18560 through lba:24448) and WRITE sqid:1 cid:0 through cid:4 (lba:24576 through lba:25088), len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:56.189 [2024-11-05 04:34:09.611638] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb17500 is same with the state(6) to be set
00:22:56.189 [2024-11-05 04:34:09.612881 .. 04:34:09.614014] nvme_qpair.c: (64 repeated command/completion pairs elided) READ sqid:1 cid:0 through cid:63 nsid:1, lba:16384 through lba:24448, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:56.190 [2024-11-05 04:34:09.614023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1b440 is same with the state(6) to be set
00:22:56.190 [2024-11-05 04:34:09.615556] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
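The repeating disconnect, reconnect-poll, "controller reinitialization failed", "Resetting controller failed" sequences above follow the usual reconnect-with-retry shape: each attempt is refused while the target is down, and the controller is eventually marked failed. A generic sketch of that pattern, using plain sockets purely for illustration (this is not SPDK's bdev_nvme logic):

    import socket, time

    def connect_with_retry(addr, port, attempts=5, delay=0.5):
        """Try to (re)connect; give up after a bounded number of refusals."""
        for _ in range(attempts):
            try:
                return socket.create_connection((addr, port), timeout=1.0)
            except ConnectionRefusedError:
                time.sleep(delay)  # target not listening yet (errno 111)
        return None  # caller marks the endpoint "in failed state"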
00:22:56.190 [2024-11-05 04:34:09.615580] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:56.190 task offset: 24576 on job bdev=Nvme1n1 fails 00:22:56.190 00:22:56.190 Latency(us) 00:22:56.190 [2024-11-05T03:34:09.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:56.190 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.190 Job: Nvme1n1 ended in about 0.96 seconds with error 00:22:56.190 Verification LBA range: start 0x0 length 0x400 00:22:56.190 Nvme1n1 : 0.96 201.04 12.56 67.01 0.00 236048.64 19660.80 246415.36 00:22:56.190 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.190 Job: Nvme2n1 ended in about 0.96 seconds with error 00:22:56.190 Verification LBA range: start 0x0 length 0x400 00:22:56.190 Nvme2n1 : 0.96 200.79 12.55 66.93 0.00 231499.52 18896.21 222822.40 00:22:56.190 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.190 Job: Nvme3n1 ended in about 0.97 seconds with error 00:22:56.190 Verification LBA range: start 0x0 length 0x400 00:22:56.190 Nvme3n1 : 0.97 137.74 8.61 66.28 0.00 297673.99 20643.84 251658.24 00:22:56.190 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.190 Job: Nvme4n1 ended in about 0.97 seconds with error 00:22:56.190 Verification LBA range: start 0x0 length 0x400 00:22:56.190 Nvme4n1 : 0.97 198.29 12.39 66.10 0.00 224859.52 16930.13 230686.72 00:22:56.190 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.190 Job: Nvme5n1 ended in about 0.97 seconds with error 00:22:56.190 Verification LBA range: start 0x0 length 0x400 00:22:56.190 Nvme5n1 : 0.97 197.81 12.36 65.94 0.00 220611.41 16820.91 249910.61 00:22:56.190 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.190 Job: Nvme6n1 ended in about 0.98 seconds with error 00:22:56.190 Verification LBA range: start 0x0 length 0x400 00:22:56.190 Nvme6n1 : 0.98 148.22 9.26 53.15 0.00 282040.72 17476.27 260396.37 00:22:56.190 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.190 Job: Nvme7n1 ended in about 0.96 seconds with error 00:22:56.190 Verification LBA range: start 0x0 length 0x400 00:22:56.190 Nvme7n1 : 0.96 199.93 12.50 66.64 0.00 208362.24 7755.09 263891.63 00:22:56.190 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.190 Job: Nvme8n1 ended in about 0.96 seconds with error 00:22:56.190 Verification LBA range: start 0x0 length 0x400 00:22:56.190 Nvme8n1 : 0.96 200.25 12.52 66.75 0.00 203113.81 21189.97 251658.24 00:22:56.191 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.191 Job: Nvme9n1 ended in about 0.98 seconds with error 00:22:56.191 Verification LBA range: start 0x0 length 0x400 00:22:56.191 Nvme9n1 : 0.98 130.52 8.16 65.26 0.00 271965.01 17367.04 249910.61 00:22:56.191 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:22:56.191 Job: Nvme10n1 ended in about 0.97 seconds with error 00:22:56.191 Verification LBA range: start 0x0 length 0x400 00:22:56.191 Nvme10n1 : 0.97 131.54 8.22 65.77 0.00 263015.82 14417.92 267386.88 00:22:56.191 [2024-11-05T03:34:09.831Z] =================================================================================================================== 00:22:56.191 [2024-11-05T03:34:09.831Z] Total : 1746.12 109.13 649.83 0.00 240272.63 7755.09 267386.88 00:22:56.191 
[2024-11-05 04:34:09.640777] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:56.191 [2024-11-05 04:34:09.640823] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:56.191 [2024-11-05 04:34:09.641099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.191 [2024-11-05 04:34:09.641119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x70c990 with addr=10.0.0.2, port=4420 00:22:56.191 [2024-11-05 04:34:09.641129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x70c990 is same with the state(6) to be set 00:22:56.191 [2024-11-05 04:34:09.641307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.191 [2024-11-05 04:34:09.641320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x715cb0 with addr=10.0.0.2, port=4420 00:22:56.191 [2024-11-05 04:34:09.641328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x715cb0 is same with the state(6) to be set 00:22:56.191 [2024-11-05 04:34:09.641337] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:56.191 [2024-11-05 04:34:09.641344] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:56.191 [2024-11-05 04:34:09.641353] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:56.191 [2024-11-05 04:34:09.641368] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:56.191 [2024-11-05 04:34:09.641375] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:56.191 [2024-11-05 04:34:09.641382] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:56.191 [2024-11-05 04:34:09.641394] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:56.191 [2024-11-05 04:34:09.641400] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:56.191 [2024-11-05 04:34:09.641407] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:56.191 [2024-11-05 04:34:09.641506] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:22:56.191 [2024-11-05 04:34:09.641531] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:56.191 [2024-11-05 04:34:09.641540] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:56.191 [2024-11-05 04:34:09.641548] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:56.191 [2024-11-05 04:34:09.641871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.191 [2024-11-05 04:34:09.641885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x62d610 with addr=10.0.0.2, port=4420 00:22:56.191 [2024-11-05 04:34:09.641897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x62d610 is same with the state(6) to be set 00:22:56.191 [2024-11-05 04:34:09.642197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.191 [2024-11-05 04:34:09.642207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb85490 with addr=10.0.0.2, port=4420 00:22:56.191 [2024-11-05 04:34:09.642215] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb85490 is same with the state(6) to be set 00:22:56.191 [2024-11-05 04:34:09.642227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70c990 (9): Bad file descriptor 00:22:56.191 [2024-11-05 04:34:09.642240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x715cb0 (9): Bad file descriptor 00:22:56.191 [2024-11-05 04:34:09.642296] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:22:56.191 [2024-11-05 04:34:09.642309] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:22:56.191 [2024-11-05 04:34:09.642959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.191 [2024-11-05 04:34:09.642975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb53110 with addr=10.0.0.2, port=4420 00:22:56.191 [2024-11-05 04:34:09.642983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb53110 is same with the state(6) to be set 00:22:56.191 [2024-11-05 04:34:09.642993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x62d610 (9): Bad file descriptor 00:22:56.191 [2024-11-05 04:34:09.643002] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb85490 (9): Bad file descriptor 00:22:56.191 [2024-11-05 04:34:09.643011] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:56.191 [2024-11-05 04:34:09.643017] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:56.191 [2024-11-05 04:34:09.643025] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:56.191 [2024-11-05 04:34:09.643035] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:56.191 [2024-11-05 04:34:09.643042] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:56.191 [2024-11-05 04:34:09.643049] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:22:56.191 [2024-11-05 04:34:09.643415] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:56.191 [2024-11-05 04:34:09.643432] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:56.191 [2024-11-05 04:34:09.643441] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:56.191 [2024-11-05 04:34:09.643450] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:56.191 [2024-11-05 04:34:09.643459] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:56.191 [2024-11-05 04:34:09.643469] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:56.191 [2024-11-05 04:34:09.643477] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:56.191 [2024-11-05 04:34:09.643516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb53110 (9): Bad file descriptor 00:22:56.191 [2024-11-05 04:34:09.643526] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:56.191 [2024-11-05 04:34:09.643536] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:56.191 [2024-11-05 04:34:09.643544] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:56.191 [2024-11-05 04:34:09.643554] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:56.191 [2024-11-05 04:34:09.643561] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:56.191 [2024-11-05 04:34:09.643568] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:56.191 [2024-11-05 04:34:09.643611] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:56.191 [2024-11-05 04:34:09.643620] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:22:56.191 [2024-11-05 04:34:09.643965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.191 [2024-11-05 04:34:09.643977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb527b0 with addr=10.0.0.2, port=4420 00:22:56.191 [2024-11-05 04:34:09.643985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb527b0 is same with the state(6) to be set 00:22:56.191 [2024-11-05 04:34:09.644336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.191 [2024-11-05 04:34:09.644346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb83c30 with addr=10.0.0.2, port=4420 00:22:56.191 [2024-11-05 04:34:09.644354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb83c30 is same with the state(6) to be set 00:22:56.191 [2024-11-05 04:34:09.644700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.191 [2024-11-05 04:34:09.644709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x713fc0 with addr=10.0.0.2, port=4420 00:22:56.191 [2024-11-05 04:34:09.644717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713fc0 is same with the state(6) to be set 00:22:56.191 [2024-11-05 04:34:09.645070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.191 [2024-11-05 04:34:09.645080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb41370 with addr=10.0.0.2, port=4420 00:22:56.191 [2024-11-05 04:34:09.645087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb41370 is same with the state(6) to be set 00:22:56.191 [2024-11-05 04:34:09.645307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.191 [2024-11-05 04:34:09.645317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x713470 with addr=10.0.0.2, port=4420 00:22:56.191 [2024-11-05 04:34:09.645325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x713470 is same with the state(6) to be set 00:22:56.191 [2024-11-05 04:34:09.645332] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:56.191 [2024-11-05 04:34:09.645339] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:56.191 [2024-11-05 04:34:09.645346] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:22:56.191 [2024-11-05 04:34:09.645377] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 
00:22:56.191 [2024-11-05 04:34:09.645388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb527b0 (9): Bad file descriptor 00:22:56.191 [2024-11-05 04:34:09.645397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb83c30 (9): Bad file descriptor 00:22:56.191 [2024-11-05 04:34:09.645407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x713fc0 (9): Bad file descriptor 00:22:56.191 [2024-11-05 04:34:09.645416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb41370 (9): Bad file descriptor 00:22:56.191 [2024-11-05 04:34:09.645428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x713470 (9): Bad file descriptor 00:22:56.191 [2024-11-05 04:34:09.645457] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:56.191 [2024-11-05 04:34:09.645465] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:56.191 [2024-11-05 04:34:09.645472] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:56.191 [2024-11-05 04:34:09.645483] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:56.191 [2024-11-05 04:34:09.645490] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:56.191 [2024-11-05 04:34:09.645497] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:56.191 [2024-11-05 04:34:09.645507] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:56.191 [2024-11-05 04:34:09.645514] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:56.191 [2024-11-05 04:34:09.645521] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:56.191 [2024-11-05 04:34:09.645531] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:56.191 [2024-11-05 04:34:09.645538] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:56.191 [2024-11-05 04:34:09.645545] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:56.191 [2024-11-05 04:34:09.645557] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:56.191 [2024-11-05 04:34:09.645564] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:56.191 [2024-11-05 04:34:09.645571] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:56.191 [2024-11-05 04:34:09.645600] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:56.191 [2024-11-05 04:34:09.645609] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:56.191 [2024-11-05 04:34:09.645616] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:56.191 [2024-11-05 04:34:09.645624] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:22:56.191 [2024-11-05 04:34:09.645631] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:56.451 04:34:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3061360 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3061360 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 3061360 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:57.393 04:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:57.393 rmmod nvme_tcp 00:22:57.393 rmmod nvme_fabrics 00:22:57.393 rmmod nvme_keyring 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 3061134 ']' 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 3061134 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 3061134 ']' 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 3061134 00:22:57.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3061134) - No such process 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3061134 is not found' 00:22:57.393 Process with pid 3061134 is not found 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.393 04:34:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.937 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:59.937 
00:22:59.937 real 0m7.799s 00:22:59.937 user 0m19.045s 00:22:59.937 sys 0m1.237s 00:22:59.937 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:59.937 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:59.937 ************************************ 00:22:59.937 END TEST nvmf_shutdown_tc3 00:22:59.937 ************************************ 00:22:59.937 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:59.937 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:59.937 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:59.937 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:22:59.937 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:59.937 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:59.937 ************************************ 00:22:59.937 START TEST nvmf_shutdown_tc4 00:22:59.937 ************************************ 00:22:59.937 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@315 -- # pci_devs=() 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:59.938 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:59.938 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:59.938 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:59.938 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:59.938 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.939 04:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:59.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.672 ms 00:22:59.939 00:22:59.939 --- 10.0.0.2 ping statistics --- 00:22:59.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.939 rtt min/avg/max/mdev = 0.672/0.672/0.672/0.000 ms 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:22:59.939 00:22:59.939 --- 10.0.0.1 ping statistics --- 00:22:59.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.939 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=3062784 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 3062784 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 3062784 ']' 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:59.939 04:34:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:59.939 [2024-11-05 04:34:13.536281] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:22:59.939 [2024-11-05 04:34:13.536366] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.199 [2024-11-05 04:34:13.631299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:00.199 [2024-11-05 04:34:13.665309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.199 [2024-11-05 04:34:13.665342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.199 [2024-11-05 04:34:13.665349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.199 [2024-11-05 04:34:13.665357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.199 [2024-11-05 04:34:13.665362] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:00.199 [2024-11-05 04:34:13.666937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.199 [2024-11-05 04:34:13.667096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.200 [2024-11-05 04:34:13.667209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.200 [2024-11-05 04:34:13.667212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:00.770 [2024-11-05 04:34:14.365866] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:00.770 04:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:00.770 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:01.030 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.030 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:01.030 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.030 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:01.030 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.030 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:01.030 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:01.030 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:23:01.030 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:01.030 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.030 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:01.030 Malloc1 
00:23:01.030 [2024-11-05 04:34:14.477514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.030 Malloc2 00:23:01.030 Malloc3 00:23:01.030 Malloc4 00:23:01.030 Malloc5 00:23:01.030 Malloc6 00:23:01.291 Malloc7 00:23:01.291 Malloc8 00:23:01.291 Malloc9 00:23:01.291 Malloc10 00:23:01.291 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.291 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:01.291 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:01.291 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:23:01.291 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3063161 00:23:01.291 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:23:01.291 04:34:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:23:01.550 [2024-11-05 04:34:14.935921] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:06.844 04:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:06.844 04:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3062784 00:23:06.844 04:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3062784 ']' 00:23:06.844 04:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3062784 00:23:06.844 04:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:23:06.844 04:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:06.844 04:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3062784 00:23:06.844 04:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:06.844 04:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:06.844 04:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3062784' 00:23:06.844 killing process with pid 3062784 00:23:06.844 04:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 3062784 00:23:06.844 04:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 3062784 00:23:06.844 [2024-11-05 04:34:19.951744] 
00:23:06.844 [2024-11-05 04:34:19.951744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189bb70 is same with the state(6) to be set
[entry repeated 15 more times for tqpair=0x189bb70]
00:23:06.845 [2024-11-05 04:34:19.952167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189c550 is same with the state(6) to be set
[entry repeated 6 more times for tqpair=0x189c550]
00:23:06.845 [2024-11-05 04:34:19.952488] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189b6a0 is same with the state(6) to be set
[entry repeated 5 more times for tqpair=0x189b6a0]
00:23:06.845 Write completed with error (sct=0, sc=8)
00:23:06.845 starting I/O failed: -6
[the two entries above repeat for the remaining writes queued on this qpair]
00:23:06.845 [2024-11-05 04:34:19.954537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:06.845 Write completed with error (sct=0, sc=8)
00:23:06.845 starting I/O failed: -6
[the two entries above repeat for the remaining writes queued on this qpair]
00:23:06.846 [2024-11-05 04:34:19.956031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:06.846 NVMe io qpair process completion error
00:23:06.846 Write completed with error (sct=0, sc=8)
00:23:06.846 starting I/O failed: -6
[a burst of the two entries above precedes each per-qpair error below]
00:23:06.846 [2024-11-05 04:34:19.957169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:06.847 [2024-11-05 04:34:19.958136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:06.847 [2024-11-05 04:34:19.959193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:06.847 [2024-11-05 04:34:19.960626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:06.847 NVMe io qpair process completion error
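The failure signature is the same for every controller: a burst of aborted writes per queue pair, one CQ transport error naming the subsystem NQN and qpair id, then a final completion error. Per the NVMe base spec, sct=0, sc=8 is Generic Command Status / Command Aborted due to SQ Deletion, which is exactly what queued writes should report when the target tears its queues down. An illustrative way to reduce a saved copy of this console output (build.log is an assumed filename) to a per-qpair summary:

# Count aborted writes, then list which qpair of which subsystem saw
# the transport error; both patterns are taken verbatim from the log.
grep -c 'Write completed with error (sct=0, sc=8)' build.log
grep -o '\[nqn\.2016-06\.io\.spdk:cnode[0-9]*, 1\] CQ transport error -6 ([^)]*) on qpair id [0-9]*' build.log | sort | uniq -c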
00:23:06.848 Write completed with error (sct=0, sc=8)
00:23:06.848 starting I/O failed: -6
[a burst of the two entries above precedes each per-qpair error below]
00:23:06.848 [2024-11-05 04:34:19.961668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:06.848 [2024-11-05 04:34:19.962484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:06.848 [2024-11-05 04:34:19.963429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:06.849 [2024-11-05 04:34:19.965053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:06.849 NVMe io qpair process completion error
00:23:06.849 Write completed with error (sct=0, sc=8)
00:23:06.849 starting I/O failed: -6
[a burst of the two entries above precedes each per-qpair error below]
00:23:06.849 [2024-11-05 04:34:19.966416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:06.850 [2024-11-05 04:34:19.967273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:06.850 [2024-11-05 04:34:19.968234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:06.850 [2024-11-05 04:34:19.970435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:06.851 NVMe io qpair process completion error
00:23:06.851 Write completed with error (sct=0, sc=8)
00:23:06.851 starting I/O failed: -6
[write-failure entries continue for the next qpair's queued writes]
00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 [2024-11-05 04:34:19.971654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 [2024-11-05 04:34:19.972473] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 
00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.851 Write completed with error (sct=0, sc=8) 00:23:06.851 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 
00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 [2024-11-05 04:34:19.974920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:06.852 NVMe io qpair process completion error 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write 
completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 
00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 [2024-11-05 04:34:19.977338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 starting I/O failed: -6 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.852 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 
00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 [2024-11-05 04:34:19.978259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 
00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 starting I/O failed: -6 00:23:06.853 [2024-11-05 04:34:19.979674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:06.853 NVMe io qpair process completion error 00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.853 Write completed with error (sct=0, sc=8) 
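For context on the failure pattern above: each "Write completed with error (sct=0, sc=8)" line is an I/O completion callback reporting NVMe status code type 0 (generic command status) with status code 8, which the NVMe base specification defines as Command Aborted due to SQ Deletion; that status is expected here, since this shutdown test tears down the target subsystems while writes are still in flight. The "starting I/O failed: -6" lines are write submissions returning -ENXIO once the qpair is gone, and the nvme_qpair.c "CQ transport error -6 (No such device or address)" entries are spdk_nvme_qpair_process_completions() reporting the dead transport. Below is a minimal sketch of that host-side path, assuming hypothetical write_done()/submit_write() helpers and preallocated ns/qpair/buf handles; the spdk_nvme_* calls are the public SPDK NVMe API, but the sketch is an illustration, not code taken from this test:

#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

/* Hypothetical completion callback: fires with a non-success status once
 * the target aborts in-flight writes, producing lines like those above. */
static void
write_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		printf("Write completed with error (sct=%d, sc=%d)\n",
		       cpl->status.sct, cpl->status.sc);
	}
}

/* Hypothetical submit-and-poll step: both paths fail with -ENXIO (-6)
 * after the TCP connection backing the qpair disappears. */
static void
submit_write(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
	     void *buf, uint64_t lba)
{
	int rc = spdk_nvme_ns_cmd_write(ns, qpair, buf, lba, 1,
					write_done, NULL, 0);
	if (rc != 0) {
		printf("starting I/O failed: %d\n", rc);
	}

	/* 0 = reap every completion currently available; a negative
	 * return means the qpair itself is no longer usable. */
	rc = spdk_nvme_qpair_process_completions(qpair, 0);
	if (rc < 0) {
		fprintf(stderr, "qpair unreachable: %d\n", rc);
	}
}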
00:23:06.853 Write completed with error (sct=0, sc=8) 00:23:06.854 starting I/O failed: -6 [repeated]
00:23:06.854 [2024-11-05 04:34:19.980948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:06.854 Write completed with error (sct=0, sc=8) 00:23:06.854 starting I/O failed: -6 [repeated]
00:23:06.854 [2024-11-05 04:34:19.981760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:06.854 starting I/O failed: -6 00:23:06.854 Write completed with error (sct=0, sc=8) [repeated]
00:23:06.854 [2024-11-05 04:34:19.982926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:06.854 Write completed with error (sct=0, sc=8) 00:23:06.855 starting I/O failed: -6 [repeated]
00:23:06.855 [2024-11-05 04:34:19.985163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:06.855 NVMe io qpair process completion error
00:23:06.855 Write completed with error (sct=0, sc=8) 00:23:06.855 starting I/O failed: -6 [repeated]
00:23:06.855 [2024-11-05 04:34:19.986385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:23:06.855 Write completed with error (sct=0, sc=8) 00:23:06.855 starting I/O failed: -6 [repeated]
00:23:06.855 [2024-11-05 04:34:19.987207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:23:06.855 Write completed with error (sct=0, sc=8) 00:23:06.856 starting I/O failed: -6 [repeated]
00:23:06.856 [2024-11-05 04:34:19.988139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:23:06.856 Write completed with error (sct=0, sc=8) 00:23:06.856 starting I/O failed: -6 [repeated]
00:23:06.856 [2024-11-05 04:34:19.989825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:23:06.856 NVMe io qpair process completion error
00:23:06.856 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed
with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 [2024-11-05 04:34:19.991215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: 
-6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 [2024-11-05 04:34:19.992044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed 
with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 [2024-11-05 04:34:19.992954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.857 starting I/O failed: -6 
00:23:06.857 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 
00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 [2024-11-05 04:34:19.996179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:06.858 NVMe io qpair process completion error 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 
00:23:06.858 [2024-11-05 04:34:19.997214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 [2024-11-05 04:34:19.998048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.858 Write completed 
with error (sct=0, sc=8) 00:23:06.858 Write completed with error (sct=0, sc=8) 00:23:06.858 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting 
I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 [2024-11-05 04:34:19.998984] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 
00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.859 starting I/O failed: -6 00:23:06.859 Write completed with error (sct=0, sc=8) 00:23:06.860 starting I/O failed: -6 00:23:06.860 Write completed with error (sct=0, sc=8) 00:23:06.860 starting I/O failed: -6 00:23:06.860 Write completed with error (sct=0, sc=8) 00:23:06.860 starting I/O failed: -6 00:23:06.860 [2024-11-05 04:34:20.000681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:23:06.860 NVMe io qpair process completion error 00:23:06.860 Initializing NVMe Controllers 00:23:06.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:23:06.860 Controller IO queue size 128, less than required. 00:23:06.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.860 Controller IO queue size 128, less than required. 00:23:06.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:23:06.860 Controller IO queue size 128, less than required. 00:23:06.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:23:06.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:23:06.860 Controller IO queue size 128, less than required. 00:23:06.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:23:06.860 Controller IO queue size 128, less than required. 00:23:06.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:23:06.860 Controller IO queue size 128, less than required. 00:23:06.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:23:06.860 Controller IO queue size 128, less than required. 00:23:06.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:23:06.860 Controller IO queue size 128, less than required. 00:23:06.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:23:06.860 Controller IO queue size 128, less than required. 00:23:06.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.860 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:23:06.860 Controller IO queue size 128, less than required. 00:23:06.860 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:06.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:23:06.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:06.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:23:06.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:23:06.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:23:06.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:23:06.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:23:06.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:23:06.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:23:06.860 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:23:06.860 Initialization complete. Launching workers. 
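The queue-size warning above means the perf tool asked for more outstanding I/O per qpair than the 128 entries these fabrics controllers advertise, so the surplus requests sit queued inside the host NVMe driver and inflate latency. A minimal sketch of rerunning with a depth the controller can absorb; the address and subsystem NQN come from this run, the depth/size/time values are illustrative, and the flag spellings follow spdk_nvme_perf usage (verify with --help on your build):

# -q per-namespace I/O depth, -o I/O size in bytes, -w pattern, -t seconds,
# -r transport ID of the target exercised in this log
perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
"$perf" -q 64 -o 4096 -w write -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode9'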
00:23:06.860 ========================================================
00:23:06.860 Latency(us)
00:23:06.860 Device Information                                              :       IOPS      MiB/s    Average        min        max
00:23:06.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    1929.16      82.89   66365.15     834.87  119557.74
00:23:06.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    1873.62      80.51   67661.17     796.05  151548.12
00:23:06.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    1895.67      81.45   66892.23     652.64  119618.70
00:23:06.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   1892.96      81.34   67005.10     687.17  120721.13
00:23:06.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    1851.78      79.57   68520.95     562.56  121475.62
00:23:06.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    1875.28      80.58   67690.42     835.50  121755.14
00:23:06.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    1854.27      79.68   68482.80     581.20  119185.57
00:23:06.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    1855.73      79.74   68451.42     747.82  127403.05
00:23:06.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    1875.07      80.57   67776.28     824.16  122683.59
00:23:06.860 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    1873.20      80.49   67869.07     845.56  132206.70
00:23:06.860 ========================================================
00:23:06.860 Total                                                            :   18776.73     806.81   67663.43     562.56  151548.12
00:23:06.860
00:23:06.860 [2024-11-05 04:34:20.003896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e0da70 is same with the state(6) to be set
00:23:06.860 [the same recv-state error is logged for tqpair 0x1e0e720, 0x1e0c560, 0x1e0e900, 0x1e0eae0, 0x1e0d410, 0x1e0cbc0, 0x1e0d740, 0x1e0cef0 and 0x1e0c890]
00:23:06.860 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:23:06.860 04:34:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
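As a quick consistency check on the table (a sketch, assuming the run kept all 128 queue slots of each of the ten controllers busy, which the queue-size warnings suggest): Little's law W = L / lambda predicts the average latency from the outstanding I/O count and the total IOPS, and lands within about 1% of the measured 67663.43 us average.

# 10 controllers x 128 outstanding writes, divided by total IOPS, in microseconds
awk 'BEGIN { printf("expected average ~ %.0f us\n", (10 * 128) / 18776.73 * 1e6) }'
# prints: expected average ~ 68170 us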
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3063161
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3063161
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 3063161
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:23:07.803 rmmod nvme_tcp
00:23:07.803 rmmod nvme_fabrics
00:23:07.803 rmmod nvme_keyring
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
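The shutdown test expects the perf process to have died, so it wraps the wait in the NOT helper traced at the top of this block: the wrapped command's failure becomes the caller's success (hence es=1 followed by a clean continuation). A simplified sketch of that wrapper; the real autotest_common.sh version also validates the argument via valid_exec_arg and treats exit codes above 128 (signal deaths) specially:

NOT() {
    local es=0
    "$@" || es=$?    # run the command and capture any non-zero exit
    (( es != 0 ))    # succeed only if the command failed
}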
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 3062784 ']'
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 3062784
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 3062784 ']'
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 3062784
00:23:07.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3062784) - No such process
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 3062784 is not found'
00:23:07.803 Process with pid 3062784 is not found
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:07.803 04:34:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:09.723 04:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:23:09.723
00:23:09.723 real 0m10.266s
00:23:09.723 user 0m27.974s
00:23:09.723 sys 0m3.905s
00:23:09.723 04:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:09.723 04:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:23:09.723 ************************************
00:23:09.723 END TEST nvmf_shutdown_tc4
00:23:09.723 ************************************
00:23:09.983 04:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:23:09.983
00:23:09.983 real 0m43.437s
00:23:09.983 user 1m46.150s
00:23:09.983 sys 0m13.479s
00:23:09.983 04:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
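The killprocess call traced above probes the pid with signal 0 before trying to kill it; since the nvmf target already died earlier in the test, it only logs that the process is gone. A simplified sketch of that helper (the real autotest_common.sh version also waits on the pid and handles root-owned processes):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    if kill -0 "$pid" 2> /dev/null; then   # signal 0: existence check only, nothing is delivered
        kill "$pid"
    else
        echo "Process with pid $pid is not found"
    fi
}

The iptr step that follows it is the three traced commands in one pipeline: replay the saved firewall ruleset minus any rule tagged SPDK_NVMF, so test-added entries disappear:

iptables-save | grep -v SPDK_NVMF | iptables-restore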
00:23:09.983 04:34:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:23:09.983 ************************************
00:23:09.983 END TEST nvmf_shutdown
00:23:09.983 ************************************
00:23:09.983 04:34:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:23:09.983
00:23:09.983 real 12m45.912s
00:23:09.983 user 27m5.485s
00:23:09.983 sys 3m43.970s
00:23:09.983 04:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:09.983 04:34:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:23:09.983 ************************************
00:23:09.983 END TEST nvmf_target_extra
00:23:09.983 ************************************
00:23:09.984 04:34:23 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:23:09.984 04:34:23 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:23:09.984 04:34:23 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:09.984 04:34:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:09.984 ************************************
00:23:09.984 START TEST nvmf_host
00:23:09.984 ************************************
00:23:09.984 04:34:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:23:09.984 * Looking for test storage...
00:23:09.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
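Every suite in this log passes through the same run_test harness: print a START banner, time the test script (producing the real/user/sys lines seen above), then print the END banner. A condensed sketch of that shape; the actual helper in autotest_common.sh also checks its argument count (the '[' 3 -le 1 ']' probes) and records the result for the end-of-run summary:

run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"    # emits the real/user/sys timing lines
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}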
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-:
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-:
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<'
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0
00:23:10.245 04:34:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:23:10.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:23:10.246 --rc genhtml_branch_coverage=1
00:23:10.246 --rc genhtml_function_coverage=1
00:23:10.246 --rc genhtml_legend=1
00:23:10.246 --rc geninfo_all_blocks=1
00:23:10.246 --rc geninfo_unexecuted_blocks=1
00:23:10.246 '
00:23:10.246 [the same option block is echoed three more times for the LCOV_OPTS= assignment and the export LCOV=lcov / LCOV=lcov assignments at common/autotest_common.sh@1704-@1705]
00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
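The lt 1.15 2 / cmp_versions walk traced above is the stock dotted-version comparison from scripts/common.sh: split both strings on '.', '-' or ':', then compare numerically component by component, treating missing components as zero. A condensed sketch of the same algorithm (numeric components only; the real helper also supports the other comparison operators):

version_lt() {
    local IFS=.-:    # split on the same separators the trace shows
    local -a v1=($1) v2=($2)
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2"    # matches the traced result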
00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:10.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.246 ************************************ 00:23:10.246 START TEST nvmf_multicontroller 00:23:10.246 ************************************ 00:23:10.246 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:10.246 * Looking for test storage... 
00:23:10.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:23:10.508 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:10.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.509 --rc genhtml_branch_coverage=1 00:23:10.509 --rc genhtml_function_coverage=1 00:23:10.509 --rc genhtml_legend=1 00:23:10.509 --rc geninfo_all_blocks=1 00:23:10.509 --rc geninfo_unexecuted_blocks=1 00:23:10.509 00:23:10.509 ' 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:10.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.509 --rc genhtml_branch_coverage=1 00:23:10.509 --rc genhtml_function_coverage=1 00:23:10.509 --rc genhtml_legend=1 00:23:10.509 --rc geninfo_all_blocks=1 00:23:10.509 --rc geninfo_unexecuted_blocks=1 00:23:10.509 00:23:10.509 ' 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:10.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.509 --rc genhtml_branch_coverage=1 00:23:10.509 --rc genhtml_function_coverage=1 00:23:10.509 --rc genhtml_legend=1 00:23:10.509 --rc geninfo_all_blocks=1 00:23:10.509 --rc geninfo_unexecuted_blocks=1 00:23:10.509 00:23:10.509 ' 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:10.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.509 --rc genhtml_branch_coverage=1 00:23:10.509 --rc genhtml_function_coverage=1 00:23:10.509 --rc genhtml_legend=1 00:23:10.509 --rc geninfo_all_blocks=1 00:23:10.509 --rc geninfo_unexecuted_blocks=1 00:23:10.509 00:23:10.509 ' 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:10.509 04:34:23 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:10.509 04:34:23 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:10.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:10.509 04:34:24 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:23:10.509 04:34:24 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:23:18.655 
04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:18.655 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:18.655 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:18.655 04:34:30 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:18.655 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:18.655 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
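Summary of the discovery trace above: nvmf/common.sh keeps tables of supported Intel (e810, x722) and Mellanox device IDs, walks the PCI devices, and resolves each match to its kernel netdev; this host has both ports of an E810 NIC (8086:159b, ice driver) exposed as cvl_0_0 and cvl_0_1, so is_hw=yes and the TCP init path is taken. Reduced to the underlying sysfs walk (a simplified sketch, hard-coding this run's vendor and device IDs):

    # list E810 ports (vendor 0x8086, device 0x159b) and their netdev names
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "${pci##*/} -> ${net##*/}"   # e.g. 0000:4b:00.0 -> cvl_0_0
        done
    done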
00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:18.655 04:34:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:18.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:18.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:23:18.656 00:23:18.656 --- 10.0.0.2 ping statistics --- 00:23:18.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.656 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:18.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:18.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:23:18.656 00:23:18.656 --- 10.0.0.1 ping statistics --- 00:23:18.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:18.656 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=3068574 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 3068574 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3068574 ']' 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:18.656 04:34:31 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.656 [2024-11-05 04:34:31.303652] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
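nvmf_tcp_init has now split the two ports across a network namespace: cvl_0_0 (10.0.0.2, the target side) was moved into cvl_0_0_ns_spdk, cvl_0_1 (10.0.0.1, the initiator side) stays in the root namespace, an iptables ACCEPT rule for TCP/4420 was inserted, and the two pings prove the loop works in both directions. nvmf_tgt is then launched inside the namespace with -m 0xE; that argument is a hexadecimal CPU mask, which is why the target reports 'Total cores available: 3' and starts reactors on cores 1-3 below while core 0 stays free for the initiator-side tools. A quick way to read such a mask:

    mask=0xE                                    # 0b1110
    for c in 0 1 2 3; do
        (( (mask >> c) & 1 )) && echo "reactor core $c"   # prints cores 1, 2, 3
    done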
00:23:18.656 [2024-11-05 04:34:31.303718] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.656 [2024-11-05 04:34:31.403632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:18.656 [2024-11-05 04:34:31.455764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.656 [2024-11-05 04:34:31.455818] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.656 [2024-11-05 04:34:31.455827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.656 [2024-11-05 04:34:31.455834] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.656 [2024-11-05 04:34:31.455840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.656 [2024-11-05 04:34:31.457599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.656 [2024-11-05 04:34:31.457790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:18.656 [2024-11-05 04:34:31.457795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.656 [2024-11-05 04:34:32.147651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.656 Malloc0 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.656 [2024-11-05 04:34:32.226417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.656 [2024-11-05 04:34:32.238358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.656 Malloc1 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.656 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.917 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.917 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:18.917 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.917 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:18.918 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.918 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3068925 00:23:18.918 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.918 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:18.918 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3068925 /var/tmp/bdevperf.sock 00:23:18.918 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 3068925 ']' 00:23:18.918 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.918 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:18.918 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
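The target setup is complete at this point: one Malloc bdev (64 MiB, 512 B blocks) per subsystem, cnode1 and cnode2 each listening on both ports 4420 and 4421 of 10.0.0.2, and bdevperf parked with -z on its own RPC socket (/var/tmp/bdevperf.sock) until the test drives it with a 4 KiB, queue-depth-128, 1-second write workload. The same configuration, replayed as explicit rpc.py calls against the target socket (rpc_cmd in the trace is a thin wrapper around this; path relative to the spdk checkout):

    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421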
00:23:18.918 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:18.918 04:34:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.860 NVMe0n1 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.860 1 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.860 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.860 request: 00:23:19.860 { 00:23:19.860 "name": "NVMe0", 00:23:19.860 "trtype": "tcp", 00:23:19.860 "traddr": "10.0.0.2", 00:23:19.860 "adrfam": "ipv4", 00:23:19.860 "trsvcid": "4420", 00:23:19.860 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:19.860 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:19.860 "hostaddr": "10.0.0.1", 00:23:19.860 "prchk_reftag": false, 00:23:19.860 "prchk_guard": false, 00:23:19.860 "hdgst": false, 00:23:19.860 "ddgst": false, 00:23:19.860 "allow_unrecognized_csi": false, 00:23:19.860 "method": "bdev_nvme_attach_controller", 00:23:19.860 "req_id": 1 00:23:19.860 } 00:23:19.860 Got JSON-RPC error response 00:23:19.860 response: 00:23:19.860 { 00:23:19.861 "code": -114, 00:23:19.861 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:19.861 } 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.861 request: 00:23:19.861 { 00:23:19.861 "name": "NVMe0", 00:23:19.861 "trtype": "tcp", 00:23:19.861 "traddr": "10.0.0.2", 00:23:19.861 "adrfam": "ipv4", 00:23:19.861 "trsvcid": "4420", 00:23:19.861 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:19.861 "hostaddr": "10.0.0.1", 00:23:19.861 "prchk_reftag": false, 00:23:19.861 "prchk_guard": false, 00:23:19.861 "hdgst": false, 00:23:19.861 "ddgst": false, 00:23:19.861 "allow_unrecognized_csi": false, 00:23:19.861 "method": "bdev_nvme_attach_controller", 00:23:19.861 "req_id": 1 00:23:19.861 } 00:23:19.861 Got JSON-RPC error response 00:23:19.861 response: 00:23:19.861 { 00:23:19.861 "code": -114, 00:23:19.861 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:19.861 } 00:23:19.861 04:34:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.861 request: 00:23:19.861 { 00:23:19.861 "name": "NVMe0", 00:23:19.861 "trtype": "tcp", 00:23:19.861 "traddr": "10.0.0.2", 00:23:19.861 "adrfam": "ipv4", 00:23:19.861 "trsvcid": "4420", 00:23:19.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.861 "hostaddr": "10.0.0.1", 00:23:19.861 "prchk_reftag": false, 00:23:19.861 "prchk_guard": false, 00:23:19.861 "hdgst": false, 00:23:19.861 "ddgst": false, 00:23:19.861 "multipath": "disable", 00:23:19.861 "allow_unrecognized_csi": false, 00:23:19.861 "method": "bdev_nvme_attach_controller", 00:23:19.861 "req_id": 1 00:23:19.861 } 00:23:19.861 Got JSON-RPC error response 00:23:19.861 response: 00:23:19.861 { 00:23:19.861 "code": -114, 00:23:19.861 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:23:19.861 } 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:19.861 04:34:33 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:19.861 request: 00:23:19.861 { 00:23:19.861 "name": "NVMe0", 00:23:19.861 "trtype": "tcp", 00:23:19.861 "traddr": "10.0.0.2", 00:23:19.861 "adrfam": "ipv4", 00:23:19.861 "trsvcid": "4420", 00:23:19.861 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.861 "hostaddr": "10.0.0.1", 00:23:19.861 "prchk_reftag": false, 00:23:19.861 "prchk_guard": false, 00:23:19.861 "hdgst": false, 00:23:19.861 "ddgst": false, 00:23:19.861 "multipath": "failover", 00:23:19.861 "allow_unrecognized_csi": false, 00:23:19.861 "method": "bdev_nvme_attach_controller", 00:23:19.861 "req_id": 1 00:23:19.861 } 00:23:19.861 Got JSON-RPC error response 00:23:19.861 response: 00:23:19.861 { 00:23:19.861 "code": -114, 00:23:19.861 "message": "A controller named NVMe0 already exists with the specified network path" 00:23:19.861 } 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.861 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.123 NVMe0n1 00:23:20.123 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
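What the four rejected calls above establish about reusing a bdev_nvme_attach_controller name: repeating the existing path under a different host NQN, pointing the existing name at a different subsystem (cnode2), asking for a second path with -x disable, and re-adding the same path even with -x failover are all refused with JSON-RPC error -114 (Linux -EALREADY). The one request that is accepted, in the final call, is the same subsystem over a genuinely new port (4421), which becomes NVMe0's second path before bdevperf runs the workload. A condensed replay of that contract (the -i hostaddr argument from the trace is omitted here; ! marks calls expected to fail):

    rpc=./scripts/rpc.py; sock=/var/tmp/bdevperf.sock
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1       # ok: creates NVMe0 with its first path
    ! $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode2       # refused (-114): name already bound to cnode1
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode1       # ok: same subsystem, new port -> second path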
00:23:20.123 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:20.123 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.123 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.123 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.123 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:23:20.123 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.123 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.384 00:23:20.384 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.384 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:20.384 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:20.384 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.384 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:20.384 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.384 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:20.384 04:34:33 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:21.325 { 00:23:21.325 "results": [ 00:23:21.325 { 00:23:21.325 "job": "NVMe0n1", 00:23:21.325 "core_mask": "0x1", 00:23:21.325 "workload": "write", 00:23:21.325 "status": "finished", 00:23:21.325 "queue_depth": 128, 00:23:21.325 "io_size": 4096, 00:23:21.325 "runtime": 1.007588, 00:23:21.325 "iops": 23819.259459223413, 00:23:21.325 "mibps": 93.04398226259146, 00:23:21.325 "io_failed": 0, 00:23:21.325 "io_timeout": 0, 00:23:21.325 "avg_latency_us": 5364.627342222222, 00:23:21.325 "min_latency_us": 2075.306666666667, 00:23:21.325 "max_latency_us": 10977.28 00:23:21.325 } 00:23:21.325 ], 00:23:21.325 "core_count": 1 00:23:21.325 } 00:23:21.584 04:34:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:21.584 04:34:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.584 04:34:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.584 04:34:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.584 04:34:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:23:21.584 04:34:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 3068925 00:23:21.585 04:34:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 3068925 ']' 00:23:21.585 04:34:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3068925 00:23:21.585 04:34:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:21.585 04:34:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:21.585 04:34:34 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3068925 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3068925' 00:23:21.585 killing process with pid 3068925 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3068925 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3068925 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:23:21.585 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:21.585 [2024-11-05 04:34:32.365133] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:23:21.585 [2024-11-05 04:34:32.365214] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3068925 ] 00:23:21.585 [2024-11-05 04:34:32.438432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.585 [2024-11-05 04:34:32.474498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.585 [2024-11-05 04:34:33.821294] bdev.c:4691:bdev_name_add: *ERROR*: Bdev name a594412c-06c6-40d8-a51b-28dd5aef26c5 already exists 00:23:21.585 [2024-11-05 04:34:33.821325] bdev.c:7836:bdev_register: *ERROR*: Unable to add uuid:a594412c-06c6-40d8-a51b-28dd5aef26c5 alias for bdev NVMe1n1 00:23:21.585 [2024-11-05 04:34:33.821335] bdev_nvme.c:4604:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:21.585 Running I/O for 1 seconds... 00:23:21.585 23761.00 IOPS, 92.82 MiB/s 00:23:21.585 Latency(us) 00:23:21.585 [2024-11-05T03:34:35.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.585 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:21.585 NVMe0n1 : 1.01 23819.26 93.04 0.00 0.00 5364.63 2075.31 10977.28 00:23:21.585 [2024-11-05T03:34:35.225Z] =================================================================================================================== 00:23:21.585 [2024-11-05T03:34:35.225Z] Total : 23819.26 93.04 0.00 0.00 5364.63 2075.31 10977.28 00:23:21.585 Received shutdown signal, test time was about 1.000000 seconds 00:23:21.585 00:23:21.585 Latency(us) 00:23:21.585 [2024-11-05T03:34:35.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:21.585 [2024-11-05T03:34:35.225Z] =================================================================================================================== 00:23:21.585 [2024-11-05T03:34:35.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:21.585 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:21.585 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:21.585 rmmod nvme_tcp 00:23:21.845 rmmod nvme_fabrics 00:23:21.845 rmmod nvme_keyring 00:23:21.845 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:21.845 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:23:21.845 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:23:21.845 
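A quick way to sanity-check the bdevperf summary embedded in the dump above is that its columns are mutually consistent:

  23819.26 IOPS x 4096 B = 97,563,689 B/s; divided by 2^20 that is 93.04 MiB/s, matching the MiB/s column.
  128 (queue depth) / 23819.26 IOPS = 5374 us expected average latency by Little's law, assuming the queue stays full; this is close to the reported 5364.63 us, with the small gap within ramp-up noise.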
04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 3068574 ']' 00:23:21.845 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 3068574 00:23:21.845 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 3068574 ']' 00:23:21.845 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 3068574 00:23:21.845 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:23:21.845 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:21.845 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3068574 00:23:21.845 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:21.845 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:21.845 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3068574' 00:23:21.845 killing process with pid 3068574 00:23:21.845 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 3068574 00:23:21.845 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 3068574 00:23:22.106 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:22.106 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:22.106 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:22.106 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:23:22.106 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:22.106 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:23:22.106 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:23:22.106 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:22.106 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:22.106 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.106 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.106 04:34:35 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.105 04:34:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:24.105 00:23:24.105 real 0m13.790s 00:23:24.105 user 0m17.417s 00:23:24.105 sys 0m6.201s 00:23:24.105 04:34:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:24.105 04:34:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.105 ************************************ 00:23:24.105 END TEST nvmf_multicontroller 00:23:24.105 ************************************ 00:23:24.105 04:34:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:23:24.105 04:34:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:24.105 04:34:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:24.105 04:34:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.105 ************************************ 00:23:24.105 START TEST nvmf_aer 00:23:24.105 ************************************ 00:23:24.105 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:24.368 * Looking for test storage... 00:23:24.368 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:24.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.368 --rc genhtml_branch_coverage=1 00:23:24.368 --rc genhtml_function_coverage=1 00:23:24.368 --rc genhtml_legend=1 00:23:24.368 --rc geninfo_all_blocks=1 00:23:24.368 --rc geninfo_unexecuted_blocks=1 00:23:24.368 00:23:24.368 ' 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:24.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.368 --rc genhtml_branch_coverage=1 00:23:24.368 --rc genhtml_function_coverage=1 00:23:24.368 --rc genhtml_legend=1 00:23:24.368 --rc geninfo_all_blocks=1 00:23:24.368 --rc geninfo_unexecuted_blocks=1 00:23:24.368 00:23:24.368 ' 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:24.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.368 --rc genhtml_branch_coverage=1 00:23:24.368 --rc genhtml_function_coverage=1 00:23:24.368 --rc genhtml_legend=1 00:23:24.368 --rc geninfo_all_blocks=1 00:23:24.368 --rc geninfo_unexecuted_blocks=1 00:23:24.368 00:23:24.368 ' 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:24.368 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:24.368 --rc genhtml_branch_coverage=1 00:23:24.368 --rc genhtml_function_coverage=1 00:23:24.368 --rc genhtml_legend=1 00:23:24.368 --rc geninfo_all_blocks=1 00:23:24.368 --rc geninfo_unexecuted_blocks=1 00:23:24.368 00:23:24.368 ' 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:24.368 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:24.369 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:23:24.369 04:34:37 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:32.512 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.512 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:32.513 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:32.513 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:32.513 04:34:44 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:32.513 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.513 04:34:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:32.513 
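The nvmf_tcp_init block above is what gives the test a real two-endpoint topology on a single host: the target-side port is moved into its own network namespace so that 10.0.0.1 -> 10.0.0.2 traffic actually traverses the NICs instead of loopback. Condensed from the commands visible in the trace (the interface names cvl_0_0/cvl_0_1 are specific to this machine's E810 ports):

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The target itself is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is why the pings and target-side RPCs later in the log are wrapped the same way.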
04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:32.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.709 ms 00:23:32.513 00:23:32.513 --- 10.0.0.2 ping statistics --- 00:23:32.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.513 rtt min/avg/max/mdev = 0.709/0.709/0.709/0.000 ms 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:32.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:23:32.513 00:23:32.513 --- 10.0.0.1 ping statistics --- 00:23:32.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.513 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=3073618 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 3073618 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 3073618 ']' 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:32.513 04:34:45 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.513 [2024-11-05 04:34:45.352773] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:23:32.513 [2024-11-05 04:34:45.352838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.513 [2024-11-05 04:34:45.434569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:32.513 [2024-11-05 04:34:45.476640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.513 [2024-11-05 04:34:45.476679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.513 [2024-11-05 04:34:45.476687] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.513 [2024-11-05 04:34:45.476694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.513 [2024-11-05 04:34:45.476700] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.513 [2024-11-05 04:34:45.478542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.513 [2024-11-05 04:34:45.478660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.513 [2024-11-05 04:34:45.478780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.513 [2024-11-05 04:34:45.478780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.776 [2024-11-05 04:34:46.209665] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.776 Malloc0 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.776 [2024-11-05 04:34:46.276292] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.776 [ 00:23:32.776 { 00:23:32.776 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:32.776 "subtype": "Discovery", 00:23:32.776 "listen_addresses": [], 00:23:32.776 "allow_any_host": true, 00:23:32.776 "hosts": [] 00:23:32.776 }, 00:23:32.776 { 00:23:32.776 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.776 "subtype": "NVMe", 00:23:32.776 "listen_addresses": [ 00:23:32.776 { 00:23:32.776 "trtype": "TCP", 00:23:32.776 "adrfam": "IPv4", 00:23:32.776 "traddr": "10.0.0.2", 00:23:32.776 "trsvcid": "4420" 00:23:32.776 } 00:23:32.776 ], 00:23:32.776 "allow_any_host": true, 00:23:32.776 "hosts": [], 00:23:32.776 "serial_number": "SPDK00000000000001", 00:23:32.776 "model_number": "SPDK bdev Controller", 00:23:32.776 "max_namespaces": 2, 00:23:32.776 "min_cntlid": 1, 00:23:32.776 "max_cntlid": 65519, 00:23:32.776 "namespaces": [ 00:23:32.776 { 00:23:32.776 "nsid": 1, 00:23:32.776 "bdev_name": "Malloc0", 00:23:32.776 "name": "Malloc0", 00:23:32.776 "nguid": "A0332B4389904E758BD423EC646BCD15", 00:23:32.776 "uuid": "a0332b43-8990-4e75-8bd4-23ec646bcd15" 00:23:32.776 } 00:23:32.776 ] 00:23:32.776 } 00:23:32.776 ] 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3073904 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:23:32.776 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:23:33.038 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:33.038 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:33.038 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:23:33.038 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:33.038 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.038 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.038 Malloc1 00:23:33.038 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.038 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:33.038 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.038 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.038 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.038 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:33.038 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.038 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.038 Asynchronous Event Request test 00:23:33.038 Attaching to 10.0.0.2 00:23:33.038 Attached to 10.0.0.2 00:23:33.038 Registering asynchronous event callbacks... 00:23:33.038 Starting namespace attribute notice tests for all controllers... 00:23:33.038 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:33.038 aer_cb - Changed Namespace 00:23:33.038 Cleaning up... 
00:23:33.038 [ 00:23:33.038 { 00:23:33.038 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:33.038 "subtype": "Discovery", 00:23:33.038 "listen_addresses": [], 00:23:33.038 "allow_any_host": true, 00:23:33.038 "hosts": [] 00:23:33.038 }, 00:23:33.038 { 00:23:33.038 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.038 "subtype": "NVMe", 00:23:33.038 "listen_addresses": [ 00:23:33.038 { 00:23:33.038 "trtype": "TCP", 00:23:33.038 "adrfam": "IPv4", 00:23:33.038 "traddr": "10.0.0.2", 00:23:33.038 "trsvcid": "4420" 00:23:33.038 } 00:23:33.038 ], 00:23:33.038 "allow_any_host": true, 00:23:33.038 "hosts": [], 00:23:33.038 "serial_number": "SPDK00000000000001", 00:23:33.038 "model_number": "SPDK bdev Controller", 00:23:33.038 "max_namespaces": 2, 00:23:33.038 "min_cntlid": 1, 00:23:33.038 "max_cntlid": 65519, 00:23:33.038 "namespaces": [ 00:23:33.038 { 00:23:33.038 "nsid": 1, 00:23:33.038 "bdev_name": "Malloc0", 00:23:33.038 "name": "Malloc0", 00:23:33.038 "nguid": "A0332B4389904E758BD423EC646BCD15", 00:23:33.038 "uuid": "a0332b43-8990-4e75-8bd4-23ec646bcd15" 00:23:33.038 }, 00:23:33.038 { 00:23:33.038 "nsid": 2, 00:23:33.038 "bdev_name": "Malloc1", 00:23:33.038 "name": "Malloc1", 00:23:33.038 "nguid": "FB416FE8312744AB95A6E1E9DE3CE29B", 00:23:33.038 "uuid": "fb416fe8-3127-44ab-95a6-e1e9de3ce29b" 00:23:33.038 } 00:23:33.038 ] 00:23:33.038 } 00:23:33.039 ] 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3073904 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:33.039 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:33.039 rmmod 
nvme_tcp 00:23:33.039 rmmod nvme_fabrics 00:23:33.039 rmmod nvme_keyring 00:23:33.299 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:33.299 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:23:33.299 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:23:33.299 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 3073618 ']' 00:23:33.299 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 3073618 00:23:33.299 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 3073618 ']' 00:23:33.299 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 3073618 00:23:33.299 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:23:33.299 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:33.299 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3073618 00:23:33.299 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:33.299 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:33.300 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3073618' 00:23:33.300 killing process with pid 3073618 00:23:33.300 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 3073618 00:23:33.300 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 3073618 00:23:33.300 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:33.300 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:33.300 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:33.300 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:23:33.300 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:23:33.300 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:33.300 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:23:33.300 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:33.300 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:33.300 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:33.300 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:33.300 04:34:46 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.844 04:34:48 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:35.844 00:23:35.844 real 0m11.299s 00:23:35.844 user 0m7.907s 00:23:35.844 sys 0m5.966s 00:23:35.844 04:34:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:35.844 04:34:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:35.844 ************************************ 00:23:35.844 END TEST nvmf_aer 00:23:35.844 ************************************ 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.844 ************************************ 00:23:35.844 START TEST nvmf_async_init 00:23:35.844 ************************************ 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:35.844 * Looking for test storage... 00:23:35.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:35.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.844 --rc genhtml_branch_coverage=1 00:23:35.844 --rc genhtml_function_coverage=1 00:23:35.844 --rc genhtml_legend=1 00:23:35.844 --rc geninfo_all_blocks=1 00:23:35.844 --rc geninfo_unexecuted_blocks=1 00:23:35.844 00:23:35.844 ' 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:35.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.844 --rc genhtml_branch_coverage=1 00:23:35.844 --rc genhtml_function_coverage=1 00:23:35.844 --rc genhtml_legend=1 00:23:35.844 --rc geninfo_all_blocks=1 00:23:35.844 --rc geninfo_unexecuted_blocks=1 00:23:35.844 00:23:35.844 ' 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:35.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.844 --rc genhtml_branch_coverage=1 00:23:35.844 --rc genhtml_function_coverage=1 00:23:35.844 --rc genhtml_legend=1 00:23:35.844 --rc geninfo_all_blocks=1 00:23:35.844 --rc geninfo_unexecuted_blocks=1 00:23:35.844 00:23:35.844 ' 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:35.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.844 --rc genhtml_branch_coverage=1 00:23:35.844 --rc genhtml_function_coverage=1 00:23:35.844 --rc genhtml_legend=1 00:23:35.844 --rc geninfo_all_blocks=1 00:23:35.844 --rc geninfo_unexecuted_blocks=1 00:23:35.844 00:23:35.844 ' 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:35.844 04:34:49 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:35.844 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:35.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:35.845 04:34:49 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=00852d59ec564910b280fd6f82bf9b93 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:23:35.845 04:34:49 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:43.998 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:43.998 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.998 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:43.999 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:43.999 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.999 04:34:56 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:43.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:43.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.679 ms 00:23:43.999 00:23:43.999 --- 10.0.0.2 ping statistics --- 00:23:43.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.999 rtt min/avg/max/mdev = 0.679/0.679/0.679/0.000 ms 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:43.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:43.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:23:43.999 00:23:43.999 --- 10.0.0.1 ping statistics --- 00:23:43.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.999 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=3077987 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 3077987 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 3077987 ']' 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:43.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:43.999 04:34:56 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.999 [2024-11-05 04:34:56.570397] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:23:43.999 [2024-11-05 04:34:56.570494] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:43.999 [2024-11-05 04:34:56.656457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.999 [2024-11-05 04:34:56.696903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:43.999 [2024-11-05 04:34:56.696941] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:43.999 [2024-11-05 04:34:56.696949] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:43.999 [2024-11-05 04:34:56.696956] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:43.999 [2024-11-05 04:34:56.696962] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:43.999 [2024-11-05 04:34:56.697561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.999 [2024-11-05 04:34:57.398696] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.999 null0 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:43.999 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 00852d59ec564910b280fd6f82bf9b93 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.000 [2024-11-05 04:34:57.438928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.000 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.261 nvme0n1 00:23:44.261 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.261 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:44.261 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.261 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.261 [ 00:23:44.261 { 00:23:44.261 "name": "nvme0n1", 00:23:44.261 "aliases": [ 00:23:44.261 "00852d59-ec56-4910-b280-fd6f82bf9b93" 00:23:44.261 ], 00:23:44.261 "product_name": "NVMe disk", 00:23:44.261 "block_size": 512, 00:23:44.261 "num_blocks": 2097152, 00:23:44.261 "uuid": "00852d59-ec56-4910-b280-fd6f82bf9b93", 00:23:44.261 "numa_id": 0, 00:23:44.261 "assigned_rate_limits": { 00:23:44.261 "rw_ios_per_sec": 0, 00:23:44.261 "rw_mbytes_per_sec": 0, 00:23:44.261 "r_mbytes_per_sec": 0, 00:23:44.261 "w_mbytes_per_sec": 0 00:23:44.261 }, 00:23:44.261 "claimed": false, 00:23:44.261 "zoned": false, 00:23:44.261 "supported_io_types": { 00:23:44.261 "read": true, 00:23:44.261 "write": true, 00:23:44.261 "unmap": false, 00:23:44.261 "flush": true, 00:23:44.261 "reset": true, 00:23:44.261 "nvme_admin": true, 00:23:44.261 "nvme_io": true, 00:23:44.261 "nvme_io_md": false, 00:23:44.261 "write_zeroes": true, 00:23:44.261 "zcopy": false, 00:23:44.261 "get_zone_info": false, 00:23:44.261 "zone_management": false, 00:23:44.261 "zone_append": false, 00:23:44.261 "compare": true, 00:23:44.261 "compare_and_write": true, 00:23:44.261 "abort": true, 00:23:44.261 "seek_hole": false, 00:23:44.261 "seek_data": false, 00:23:44.261 "copy": true, 00:23:44.261 "nvme_iov_md": false 00:23:44.261 }, 00:23:44.261 
"memory_domains": [ 00:23:44.261 { 00:23:44.261 "dma_device_id": "system", 00:23:44.261 "dma_device_type": 1 00:23:44.261 } 00:23:44.261 ], 00:23:44.262 "driver_specific": { 00:23:44.262 "nvme": [ 00:23:44.262 { 00:23:44.262 "trid": { 00:23:44.262 "trtype": "TCP", 00:23:44.262 "adrfam": "IPv4", 00:23:44.262 "traddr": "10.0.0.2", 00:23:44.262 "trsvcid": "4420", 00:23:44.262 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:44.262 }, 00:23:44.262 "ctrlr_data": { 00:23:44.262 "cntlid": 1, 00:23:44.262 "vendor_id": "0x8086", 00:23:44.262 "model_number": "SPDK bdev Controller", 00:23:44.262 "serial_number": "00000000000000000000", 00:23:44.262 "firmware_revision": "25.01", 00:23:44.262 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:44.262 "oacs": { 00:23:44.262 "security": 0, 00:23:44.262 "format": 0, 00:23:44.262 "firmware": 0, 00:23:44.262 "ns_manage": 0 00:23:44.262 }, 00:23:44.262 "multi_ctrlr": true, 00:23:44.262 "ana_reporting": false 00:23:44.262 }, 00:23:44.262 "vs": { 00:23:44.262 "nvme_version": "1.3" 00:23:44.262 }, 00:23:44.262 "ns_data": { 00:23:44.262 "id": 1, 00:23:44.262 "can_share": true 00:23:44.262 } 00:23:44.262 } 00:23:44.262 ], 00:23:44.262 "mp_policy": "active_passive" 00:23:44.262 } 00:23:44.262 } 00:23:44.262 ] 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.262 [2024-11-05 04:34:57.695994] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:44.262 [2024-11-05 04:34:57.696057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe04f60 (9): Bad file descriptor 00:23:44.262 [2024-11-05 04:34:57.827846] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.262 [ 00:23:44.262 { 00:23:44.262 "name": "nvme0n1", 00:23:44.262 "aliases": [ 00:23:44.262 "00852d59-ec56-4910-b280-fd6f82bf9b93" 00:23:44.262 ], 00:23:44.262 "product_name": "NVMe disk", 00:23:44.262 "block_size": 512, 00:23:44.262 "num_blocks": 2097152, 00:23:44.262 "uuid": "00852d59-ec56-4910-b280-fd6f82bf9b93", 00:23:44.262 "numa_id": 0, 00:23:44.262 "assigned_rate_limits": { 00:23:44.262 "rw_ios_per_sec": 0, 00:23:44.262 "rw_mbytes_per_sec": 0, 00:23:44.262 "r_mbytes_per_sec": 0, 00:23:44.262 "w_mbytes_per_sec": 0 00:23:44.262 }, 00:23:44.262 "claimed": false, 00:23:44.262 "zoned": false, 00:23:44.262 "supported_io_types": { 00:23:44.262 "read": true, 00:23:44.262 "write": true, 00:23:44.262 "unmap": false, 00:23:44.262 "flush": true, 00:23:44.262 "reset": true, 00:23:44.262 "nvme_admin": true, 00:23:44.262 "nvme_io": true, 00:23:44.262 "nvme_io_md": false, 00:23:44.262 "write_zeroes": true, 00:23:44.262 "zcopy": false, 00:23:44.262 "get_zone_info": false, 00:23:44.262 "zone_management": false, 00:23:44.262 "zone_append": false, 00:23:44.262 "compare": true, 00:23:44.262 "compare_and_write": true, 00:23:44.262 "abort": true, 00:23:44.262 "seek_hole": false, 00:23:44.262 "seek_data": false, 00:23:44.262 "copy": true, 00:23:44.262 "nvme_iov_md": false 00:23:44.262 }, 00:23:44.262 "memory_domains": [ 00:23:44.262 { 00:23:44.262 "dma_device_id": "system", 00:23:44.262 "dma_device_type": 1 00:23:44.262 } 00:23:44.262 ], 00:23:44.262 "driver_specific": { 00:23:44.262 "nvme": [ 00:23:44.262 { 00:23:44.262 "trid": { 00:23:44.262 "trtype": "TCP", 00:23:44.262 "adrfam": "IPv4", 00:23:44.262 "traddr": "10.0.0.2", 00:23:44.262 "trsvcid": "4420", 00:23:44.262 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:44.262 }, 00:23:44.262 "ctrlr_data": { 00:23:44.262 "cntlid": 2, 00:23:44.262 "vendor_id": "0x8086", 00:23:44.262 "model_number": "SPDK bdev Controller", 00:23:44.262 "serial_number": "00000000000000000000", 00:23:44.262 "firmware_revision": "25.01", 00:23:44.262 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:44.262 "oacs": { 00:23:44.262 "security": 0, 00:23:44.262 "format": 0, 00:23:44.262 "firmware": 0, 00:23:44.262 "ns_manage": 0 00:23:44.262 }, 00:23:44.262 "multi_ctrlr": true, 00:23:44.262 "ana_reporting": false 00:23:44.262 }, 00:23:44.262 "vs": { 00:23:44.262 "nvme_version": "1.3" 00:23:44.262 }, 00:23:44.262 "ns_data": { 00:23:44.262 "id": 1, 00:23:44.262 "can_share": true 00:23:44.262 } 00:23:44.262 } 00:23:44.262 ], 00:23:44.262 "mp_policy": "active_passive" 00:23:44.262 } 00:23:44.262 } 00:23:44.262 ] 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
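The detach above closes out the plain-TCP leg of the test. For reference, the attach/detach pair that brackets the two bdev listings, sketched with the same address, port, and subsystem NQN this run uses (the rpc.py invocation is again illustrative):

    # create controller nvme0; the subsystem's namespace surfaces as bdev nvme0n1
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    # tear the controller down again when finished
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0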
00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.gugrjS7Op8 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.gugrjS7Op8 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.gugrjS7Op8 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.262 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.262 [2024-11-05 04:34:57.896643] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:44.262 [2024-11-05 04:34:57.896755] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:44.524 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.524 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:44.524 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.524 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.524 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.524 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:44.524 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.524 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.524 [2024-11-05 04:34:57.912703] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:44.524 nvme0n1 00:23:44.524 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.524 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:23:44.524 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.524 04:34:57 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.524 [ 00:23:44.524 { 00:23:44.524 "name": "nvme0n1", 00:23:44.524 "aliases": [ 00:23:44.524 "00852d59-ec56-4910-b280-fd6f82bf9b93" 00:23:44.524 ], 00:23:44.524 "product_name": "NVMe disk", 00:23:44.524 "block_size": 512, 00:23:44.524 "num_blocks": 2097152, 00:23:44.524 "uuid": "00852d59-ec56-4910-b280-fd6f82bf9b93", 00:23:44.524 "numa_id": 0, 00:23:44.524 "assigned_rate_limits": { 00:23:44.524 "rw_ios_per_sec": 0, 00:23:44.524 "rw_mbytes_per_sec": 0, 00:23:44.524 "r_mbytes_per_sec": 0, 00:23:44.524 "w_mbytes_per_sec": 0 00:23:44.524 }, 00:23:44.524 "claimed": false, 00:23:44.524 "zoned": false, 00:23:44.524 "supported_io_types": { 00:23:44.524 "read": true, 00:23:44.524 "write": true, 00:23:44.524 "unmap": false, 00:23:44.524 "flush": true, 00:23:44.524 "reset": true, 00:23:44.524 "nvme_admin": true, 00:23:44.524 "nvme_io": true, 00:23:44.524 "nvme_io_md": false, 00:23:44.524 "write_zeroes": true, 00:23:44.524 "zcopy": false, 00:23:44.524 "get_zone_info": false, 00:23:44.524 "zone_management": false, 00:23:44.524 "zone_append": false, 00:23:44.524 "compare": true, 00:23:44.524 "compare_and_write": true, 00:23:44.524 "abort": true, 00:23:44.524 "seek_hole": false, 00:23:44.524 "seek_data": false, 00:23:44.524 "copy": true, 00:23:44.524 "nvme_iov_md": false 00:23:44.524 }, 00:23:44.524 "memory_domains": [ 00:23:44.524 { 00:23:44.524 "dma_device_id": "system", 00:23:44.524 "dma_device_type": 1 00:23:44.524 } 00:23:44.524 ], 00:23:44.524 "driver_specific": { 00:23:44.524 "nvme": [ 00:23:44.524 { 00:23:44.524 "trid": { 00:23:44.524 "trtype": "TCP", 00:23:44.524 "adrfam": "IPv4", 00:23:44.524 "traddr": "10.0.0.2", 00:23:44.524 "trsvcid": "4421", 00:23:44.524 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:44.524 }, 00:23:44.524 "ctrlr_data": { 00:23:44.524 "cntlid": 3, 00:23:44.524 "vendor_id": "0x8086", 00:23:44.525 "model_number": "SPDK bdev Controller", 00:23:44.525 "serial_number": "00000000000000000000", 00:23:44.525 "firmware_revision": "25.01", 00:23:44.525 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:44.525 "oacs": { 00:23:44.525 "security": 0, 00:23:44.525 "format": 0, 00:23:44.525 "firmware": 0, 00:23:44.525 "ns_manage": 0 00:23:44.525 }, 00:23:44.525 "multi_ctrlr": true, 00:23:44.525 "ana_reporting": false 00:23:44.525 }, 00:23:44.525 "vs": { 00:23:44.525 "nvme_version": "1.3" 00:23:44.525 }, 00:23:44.525 "ns_data": { 00:23:44.525 "id": 1, 00:23:44.525 "can_share": true 00:23:44.525 } 00:23:44.525 } 00:23:44.525 ], 00:23:44.525 "mp_policy": "active_passive" 00:23:44.525 } 00:23:44.525 } 00:23:44.525 ] 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.gugrjS7Op8 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
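The TLS leg that just completed condenses to the following RPC sequence, sketched with the names and flags shown in the trace above; the PSK contents are elided, and note that the target itself logs TLS support as experimental:

    # register the PSK file with the keyring (the test chmods it to 0600 first)
    ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gugrjS7Op8
    # require an explicit host allow list, then open a TLS-only listener on a second port
    ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
    # reconnect over the secured port, presenting the host NQN and the same key (cntlid advances to 3 above)
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0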
00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:44.525 rmmod nvme_tcp 00:23:44.525 rmmod nvme_fabrics 00:23:44.525 rmmod nvme_keyring 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 3077987 ']' 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 3077987 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 3077987 ']' 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 3077987 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:44.525 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3077987 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3077987' 00:23:44.786 killing process with pid 3077987 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 3077987 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 3077987 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.786 04:34:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:47.342 00:23:47.342 real 0m11.317s 00:23:47.342 user 0m4.116s 00:23:47.342 sys 0m5.735s 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:47.342 ************************************ 00:23:47.342 END TEST nvmf_async_init 00:23:47.342 ************************************ 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.342 ************************************ 00:23:47.342 START TEST dma 00:23:47.342 ************************************ 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:47.342 * Looking for test storage... 00:23:47.342 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:47.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.342 --rc genhtml_branch_coverage=1 00:23:47.342 --rc genhtml_function_coverage=1 00:23:47.342 --rc genhtml_legend=1 00:23:47.342 --rc geninfo_all_blocks=1 00:23:47.342 --rc geninfo_unexecuted_blocks=1 00:23:47.342 00:23:47.342 ' 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:47.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.342 --rc genhtml_branch_coverage=1 00:23:47.342 --rc genhtml_function_coverage=1 00:23:47.342 --rc genhtml_legend=1 00:23:47.342 --rc geninfo_all_blocks=1 00:23:47.342 --rc geninfo_unexecuted_blocks=1 00:23:47.342 00:23:47.342 ' 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:47.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.342 --rc genhtml_branch_coverage=1 00:23:47.342 --rc genhtml_function_coverage=1 00:23:47.342 --rc genhtml_legend=1 00:23:47.342 --rc geninfo_all_blocks=1 00:23:47.342 --rc geninfo_unexecuted_blocks=1 00:23:47.342 00:23:47.342 ' 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:47.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.342 --rc genhtml_branch_coverage=1 00:23:47.342 --rc genhtml_function_coverage=1 00:23:47.342 --rc genhtml_legend=1 00:23:47.342 --rc geninfo_all_blocks=1 00:23:47.342 --rc geninfo_unexecuted_blocks=1 00:23:47.342 00:23:47.342 ' 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.342 
04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.342 04:35:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:47.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:47.343 00:23:47.343 real 0m0.230s 00:23:47.343 user 0m0.136s 00:23:47.343 sys 0m0.107s 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:47.343 ************************************ 00:23:47.343 END TEST dma 00:23:47.343 ************************************ 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.343 ************************************ 00:23:47.343 START TEST nvmf_identify 00:23:47.343 
************************************ 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:47.343 * Looking for test storage... 00:23:47.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:47.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.343 --rc genhtml_branch_coverage=1 00:23:47.343 --rc genhtml_function_coverage=1 00:23:47.343 --rc genhtml_legend=1 00:23:47.343 --rc geninfo_all_blocks=1 00:23:47.343 --rc geninfo_unexecuted_blocks=1 00:23:47.343 00:23:47.343 ' 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:47.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.343 --rc genhtml_branch_coverage=1 00:23:47.343 --rc genhtml_function_coverage=1 00:23:47.343 --rc genhtml_legend=1 00:23:47.343 --rc geninfo_all_blocks=1 00:23:47.343 --rc geninfo_unexecuted_blocks=1 00:23:47.343 00:23:47.343 ' 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:47.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.343 --rc genhtml_branch_coverage=1 00:23:47.343 --rc genhtml_function_coverage=1 00:23:47.343 --rc genhtml_legend=1 00:23:47.343 --rc geninfo_all_blocks=1 00:23:47.343 --rc geninfo_unexecuted_blocks=1 00:23:47.343 00:23:47.343 ' 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:47.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:47.343 --rc genhtml_branch_coverage=1 00:23:47.343 --rc genhtml_function_coverage=1 00:23:47.343 --rc genhtml_legend=1 00:23:47.343 --rc geninfo_all_blocks=1 00:23:47.343 --rc geninfo_unexecuted_blocks=1 00:23:47.343 00:23:47.343 ' 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.343 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:47.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:47.344 04:35:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:55.488 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:55.488 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
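
The xtrace around this point (nvmf/common.sh@410-429) is the NIC discovery pass: each supported PCI function is mapped to its kernel net devices through sysfs, and only interfaces that are up are kept, producing the "Found net devices under ..." lines that follow. A minimal bash sketch of that lookup, hard-coding the two e810 ports the log found just above; reading operstate here is an assumption standing in for the `[[ up == up ]]` test at common.sh@418:

    net_devs=()
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        pci_net_devs=()
        # every netdev bound to this PCI function appears under its sysfs node
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            # keep only interfaces that are up (stand-in for the check at common.sh@418)
            [[ $(cat "$dev/operstate") == up ]] && pci_net_devs+=("${dev##*/}")
        done
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done
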
00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:55.488 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.488 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:55.489 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:55.489 04:35:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:55.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:55.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:23:55.489 00:23:55.489 --- 10.0.0.2 ping statistics --- 00:23:55.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.489 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:55.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:55.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:23:55.489 00:23:55.489 --- 10.0.0.1 ping statistics --- 00:23:55.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.489 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3082700 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3082700 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 3082700 ']' 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:55.489 04:35:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.489 [2024-11-05 04:35:08.391461] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
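
At this point the target environment is fully in place: the trace from nvmf/common.sh@250 onward moved one e810 port into a private network namespace, addressed both ends, opened the NVMe/TCP port, verified reachability in both directions, and launched nvmf_tgt inside the namespace. A condensed replay of those steps, with every command taken from the log; binary paths are shortened to be repo-relative, the iptables comment tag is dropped for brevity, and the final wait loop is a simplified stand-in for the real waitforlisten helper:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> namespaced target port
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and back the other way
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done  # crude waitforlisten stand-in
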
00:23:55.489 [2024-11-05 04:35:08.391529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.489 [2024-11-05 04:35:08.475980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:55.489 [2024-11-05 04:35:08.518703] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.489 [2024-11-05 04:35:08.518740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:55.489 [2024-11-05 04:35:08.518755] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.489 [2024-11-05 04:35:08.518763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.489 [2024-11-05 04:35:08.518769] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:55.489 [2024-11-05 04:35:08.520601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.489 [2024-11-05 04:35:08.520740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:55.489 [2024-11-05 04:35:08.520898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:55.489 [2024-11-05 04:35:08.521030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.751 [2024-11-05 04:35:09.191607] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.751 Malloc0 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.751 [2024-11-05 04:35:09.304102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:55.751 [ 00:23:55.751 { 00:23:55.751 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:55.751 "subtype": "Discovery", 00:23:55.751 "listen_addresses": [ 00:23:55.751 { 00:23:55.751 "trtype": "TCP", 00:23:55.751 "adrfam": "IPv4", 00:23:55.751 "traddr": "10.0.0.2", 00:23:55.751 "trsvcid": "4420" 00:23:55.751 } 00:23:55.751 ], 00:23:55.751 "allow_any_host": true, 00:23:55.751 "hosts": [] 00:23:55.751 }, 00:23:55.751 { 00:23:55.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:55.751 "subtype": "NVMe", 00:23:55.751 "listen_addresses": [ 00:23:55.751 { 00:23:55.751 "trtype": "TCP", 00:23:55.751 "adrfam": "IPv4", 00:23:55.751 "traddr": "10.0.0.2", 00:23:55.751 "trsvcid": "4420" 00:23:55.751 } 00:23:55.751 ], 00:23:55.751 "allow_any_host": true, 00:23:55.751 "hosts": [], 00:23:55.751 "serial_number": "SPDK00000000000001", 00:23:55.751 "model_number": "SPDK bdev Controller", 00:23:55.751 "max_namespaces": 32, 00:23:55.751 "min_cntlid": 1, 00:23:55.751 "max_cntlid": 65519, 00:23:55.751 "namespaces": [ 00:23:55.751 { 00:23:55.751 "nsid": 1, 00:23:55.751 "bdev_name": "Malloc0", 00:23:55.751 "name": "Malloc0", 00:23:55.751 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:55.751 "eui64": "ABCDEF0123456789", 00:23:55.751 "uuid": "381e392d-8bce-4658-a6ae-63e4c82161f4" 00:23:55.751 } 00:23:55.751 ] 00:23:55.751 } 00:23:55.751 ] 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.751 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:55.751 [2024-11-05 04:35:09.367353] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:23:55.751 [2024-11-05 04:35:09.367393] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3082931 ] 00:23:56.016 [2024-11-05 04:35:09.419849] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:56.016 [2024-11-05 04:35:09.419903] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:56.016 [2024-11-05 04:35:09.419908] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:56.016 [2024-11-05 04:35:09.419920] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:56.016 [2024-11-05 04:35:09.419929] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:56.016 [2024-11-05 04:35:09.424073] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:56.016 [2024-11-05 04:35:09.424108] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1dfd690 0 00:23:56.016 [2024-11-05 04:35:09.431755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:56.016 [2024-11-05 04:35:09.431768] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:56.016 [2024-11-05 04:35:09.431773] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:56.016 [2024-11-05 04:35:09.431777] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:56.016 [2024-11-05 04:35:09.431809] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.016 [2024-11-05 04:35:09.431815] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.016 [2024-11-05 04:35:09.431819] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfd690) 00:23:56.016 [2024-11-05 04:35:09.431833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:56.016 [2024-11-05 04:35:09.431851] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f100, cid 0, qid 0 00:23:56.016 [2024-11-05 04:35:09.438759] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.016 [2024-11-05 04:35:09.438769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.016 [2024-11-05 04:35:09.438772] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.016 [2024-11-05 04:35:09.438777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f100) on tqpair=0x1dfd690 00:23:56.016 [2024-11-05 04:35:09.438790] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:56.016 [2024-11-05 04:35:09.438798] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:56.016 [2024-11-05 04:35:09.438804] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:56.016 [2024-11-05 04:35:09.438818] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.016 [2024-11-05 04:35:09.438822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.016 [2024-11-05 04:35:09.438825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfd690) 00:23:56.016 [2024-11-05 04:35:09.438834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.016 [2024-11-05 04:35:09.438848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f100, cid 0, qid 0 00:23:56.016 [2024-11-05 04:35:09.439019] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.016 [2024-11-05 04:35:09.439026] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.016 [2024-11-05 04:35:09.439029] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.016 [2024-11-05 04:35:09.439033] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f100) on tqpair=0x1dfd690 00:23:56.016 [2024-11-05 04:35:09.439039] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:56.017 [2024-11-05 04:35:09.439046] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:56.017 [2024-11-05 04:35:09.439053] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.439057] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.439061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfd690) 00:23:56.017 [2024-11-05 04:35:09.439068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.017 [2024-11-05 04:35:09.439078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f100, cid 0, qid 0 00:23:56.017 [2024-11-05 04:35:09.439286] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.017 [2024-11-05 04:35:09.439292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.017 [2024-11-05 04:35:09.439296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.439300] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f100) on tqpair=0x1dfd690 00:23:56.017 [2024-11-05 04:35:09.439305] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:56.017 [2024-11-05 04:35:09.439313] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:56.017 [2024-11-05 04:35:09.439320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.439324] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.439327] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfd690) 00:23:56.017 [2024-11-05 04:35:09.439334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.017 [2024-11-05 04:35:09.439344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f100, cid 0, qid 0 
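
From here the trace follows spdk_nvme_identify's admin-queue bring-up against the discovery subsystem it was pointed at. A recap of the ladder the *DEBUG* state lines walk through, plus the invocation itself as run at host/identify.sh@39 above (binary path shortened to be repo-relative; the reading of -L is an assumption based on this trace's output):

    # Admin-queue bring-up ladder, in the order the *DEBUG* state lines report it:
    #   1. FABRIC CONNECT on qid 0                     -> CNTLID 0x0001
    #   2. FABRIC PROPERTY GET vs, then cap
    #   3. PROPERTY GET cc: CC.EN = 0 && CSTS.RDY = 0  -> controller is disabled
    #   4. FABRIC PROPERTY SET CC.EN = 1, then poll until CSTS.RDY = 1
    #   5. IDENTIFY controller (cdw10:00000001), configure AER, set keep-alive
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all    # enables the debug log flags that produce this trace
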
00:23:56.017 [2024-11-05 04:35:09.439555] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.017 [2024-11-05 04:35:09.439562] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.017 [2024-11-05 04:35:09.439565] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.439569] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f100) on tqpair=0x1dfd690 00:23:56.017 [2024-11-05 04:35:09.439574] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:56.017 [2024-11-05 04:35:09.439586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.439591] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.439595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfd690) 00:23:56.017 [2024-11-05 04:35:09.439601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.017 [2024-11-05 04:35:09.439612] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f100, cid 0, qid 0 00:23:56.017 [2024-11-05 04:35:09.439787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.017 [2024-11-05 04:35:09.439794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.017 [2024-11-05 04:35:09.439798] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.439804] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f100) on tqpair=0x1dfd690 00:23:56.017 [2024-11-05 04:35:09.439809] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:56.017 [2024-11-05 04:35:09.439814] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:56.017 [2024-11-05 04:35:09.439822] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:56.017 [2024-11-05 04:35:09.439927] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:56.017 [2024-11-05 04:35:09.439932] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:56.017 [2024-11-05 04:35:09.439941] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.439945] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.439948] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfd690) 00:23:56.017 [2024-11-05 04:35:09.439955] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.017 [2024-11-05 04:35:09.439966] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f100, cid 0, qid 0 00:23:56.017 [2024-11-05 04:35:09.440152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.017 [2024-11-05 04:35:09.440159] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.017 [2024-11-05 04:35:09.440162] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.440166] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f100) on tqpair=0x1dfd690 00:23:56.017 [2024-11-05 04:35:09.440171] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:56.017 [2024-11-05 04:35:09.440181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.440185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.440188] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfd690) 00:23:56.017 [2024-11-05 04:35:09.440195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.017 [2024-11-05 04:35:09.440205] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f100, cid 0, qid 0 00:23:56.017 [2024-11-05 04:35:09.440367] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.017 [2024-11-05 04:35:09.440374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.017 [2024-11-05 04:35:09.440377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.440381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f100) on tqpair=0x1dfd690 00:23:56.017 [2024-11-05 04:35:09.440386] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:56.017 [2024-11-05 04:35:09.440391] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:56.017 [2024-11-05 04:35:09.440399] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:56.017 [2024-11-05 04:35:09.440406] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:56.017 [2024-11-05 04:35:09.440415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.440419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfd690) 00:23:56.017 [2024-11-05 04:35:09.440430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.017 [2024-11-05 04:35:09.440441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f100, cid 0, qid 0 00:23:56.017 [2024-11-05 04:35:09.440686] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:56.017 [2024-11-05 04:35:09.440692] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:56.017 [2024-11-05 04:35:09.440696] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.440700] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dfd690): datao=0, datal=4096, cccid=0 00:23:56.017 [2024-11-05 04:35:09.440705] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x1e5f100) on tqpair(0x1dfd690): expected_datao=0, payload_size=4096 00:23:56.017 [2024-11-05 04:35:09.440709] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.440722] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.440727] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.440879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.017 [2024-11-05 04:35:09.440886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.017 [2024-11-05 04:35:09.440889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.440893] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f100) on tqpair=0x1dfd690 00:23:56.017 [2024-11-05 04:35:09.440901] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:56.017 [2024-11-05 04:35:09.440906] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:56.017 [2024-11-05 04:35:09.440911] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:56.017 [2024-11-05 04:35:09.440916] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:56.017 [2024-11-05 04:35:09.440921] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:56.017 [2024-11-05 04:35:09.440926] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:56.017 [2024-11-05 04:35:09.440935] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:56.017 [2024-11-05 04:35:09.440942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.440946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.017 [2024-11-05 04:35:09.440949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfd690) 00:23:56.017 [2024-11-05 04:35:09.440956] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:56.017 [2024-11-05 04:35:09.440967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f100, cid 0, qid 0 00:23:56.017 [2024-11-05 04:35:09.441157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.017 [2024-11-05 04:35:09.441164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.017 [2024-11-05 04:35:09.441167] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.441171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f100) on tqpair=0x1dfd690 00:23:56.018 [2024-11-05 04:35:09.441183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.441187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.441191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1dfd690) 00:23:56.018 
[2024-11-05 04:35:09.441199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.018 [2024-11-05 04:35:09.441205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.441209] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.441213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1dfd690) 00:23:56.018 [2024-11-05 04:35:09.441219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.018 [2024-11-05 04:35:09.441225] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.441229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.441232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1dfd690) 00:23:56.018 [2024-11-05 04:35:09.441238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.018 [2024-11-05 04:35:09.441245] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.441248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.441252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690) 00:23:56.018 [2024-11-05 04:35:09.441258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.018 [2024-11-05 04:35:09.441263] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:56.018 [2024-11-05 04:35:09.441271] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:56.018 [2024-11-05 04:35:09.441278] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.441281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dfd690) 00:23:56.018 [2024-11-05 04:35:09.441288] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.018 [2024-11-05 04:35:09.441300] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f100, cid 0, qid 0 00:23:56.018 [2024-11-05 04:35:09.441305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f280, cid 1, qid 0 00:23:56.018 [2024-11-05 04:35:09.441310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f400, cid 2, qid 0 00:23:56.018 [2024-11-05 04:35:09.441315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0 00:23:56.018 [2024-11-05 04:35:09.441320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f700, cid 4, qid 0 00:23:56.018 [2024-11-05 04:35:09.441553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.018 [2024-11-05 04:35:09.441560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.018 [2024-11-05 04:35:09.441563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:23:56.018 [2024-11-05 04:35:09.441567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f700) on tqpair=0x1dfd690 00:23:56.018 [2024-11-05 04:35:09.441574] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:56.018 [2024-11-05 04:35:09.441580] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:23:56.018 [2024-11-05 04:35:09.441590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.441594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dfd690) 00:23:56.018 [2024-11-05 04:35:09.441600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.018 [2024-11-05 04:35:09.441613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f700, cid 4, qid 0 00:23:56.018 [2024-11-05 04:35:09.441805] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:56.018 [2024-11-05 04:35:09.441812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:56.018 [2024-11-05 04:35:09.441816] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.441819] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dfd690): datao=0, datal=4096, cccid=4 00:23:56.018 [2024-11-05 04:35:09.441824] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5f700) on tqpair(0x1dfd690): expected_datao=0, payload_size=4096 00:23:56.018 [2024-11-05 04:35:09.441828] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.441843] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.441847] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.485755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.018 [2024-11-05 04:35:09.485766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.018 [2024-11-05 04:35:09.485770] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.485774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f700) on tqpair=0x1dfd690 00:23:56.018 [2024-11-05 04:35:09.485787] nvme_ctrlr.c:4166:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:56.018 [2024-11-05 04:35:09.485813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.485818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dfd690) 00:23:56.018 [2024-11-05 04:35:09.485826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.018 [2024-11-05 04:35:09.485833] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.485837] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.485841] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1dfd690) 00:23:56.018 [2024-11-05 04:35:09.485847] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.018 [2024-11-05 04:35:09.485863] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f700, cid 4, qid 0 00:23:56.018 [2024-11-05 04:35:09.485868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f880, cid 5, qid 0 00:23:56.018 [2024-11-05 04:35:09.486141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:56.018 [2024-11-05 04:35:09.486147] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:56.018 [2024-11-05 04:35:09.486151] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.486154] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dfd690): datao=0, datal=1024, cccid=4 00:23:56.018 [2024-11-05 04:35:09.486159] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5f700) on tqpair(0x1dfd690): expected_datao=0, payload_size=1024 00:23:56.018 [2024-11-05 04:35:09.486163] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.486170] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.486174] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.486180] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.018 [2024-11-05 04:35:09.486185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.018 [2024-11-05 04:35:09.486189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.486193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f880) on tqpair=0x1dfd690 00:23:56.018 [2024-11-05 04:35:09.526940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.018 [2024-11-05 04:35:09.526949] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.018 [2024-11-05 04:35:09.526956] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.526960] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f700) on tqpair=0x1dfd690 00:23:56.018 [2024-11-05 04:35:09.526971] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.526975] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dfd690) 00:23:56.018 [2024-11-05 04:35:09.526982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.018 [2024-11-05 04:35:09.526997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f700, cid 4, qid 0 00:23:56.018 [2024-11-05 04:35:09.527278] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:56.018 [2024-11-05 04:35:09.527284] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:56.018 [2024-11-05 04:35:09.527288] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:56.018 [2024-11-05 04:35:09.527291] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dfd690): datao=0, datal=3072, cccid=4 00:23:56.018 [2024-11-05 04:35:09.527296] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5f700) on tqpair(0x1dfd690): expected_datao=0, payload_size=3072 00:23:56.018 [2024-11-05 04:35:09.527300] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:56.018 [2024-11-05 04:35:09.527307] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:56.018 [2024-11-05 04:35:09.527311] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:56.018 [2024-11-05 04:35:09.527447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:56.018 [2024-11-05 04:35:09.527453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:56.018 [2024-11-05 04:35:09.527456] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:56.018 [2024-11-05 04:35:09.527460] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f700) on tqpair=0x1dfd690
00:23:56.018 [2024-11-05 04:35:09.527469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:56.019 [2024-11-05 04:35:09.527473] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1dfd690)
00:23:56.019 [2024-11-05 04:35:09.527479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.019 [2024-11-05 04:35:09.527493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f700, cid 4, qid 0
00:23:56.019 [2024-11-05 04:35:09.527776] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:56.019 [2024-11-05 04:35:09.527782] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:56.019 [2024-11-05 04:35:09.527786] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:56.019 [2024-11-05 04:35:09.527789] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1dfd690): datao=0, datal=8, cccid=4
00:23:56.019 [2024-11-05 04:35:09.527794] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1e5f700) on tqpair(0x1dfd690): expected_datao=0, payload_size=8
00:23:56.019 [2024-11-05 04:35:09.527798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:56.019 [2024-11-05 04:35:09.527805] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:56.019 [2024-11-05 04:35:09.527808] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:56.019 [2024-11-05 04:35:09.571755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:56.019 [2024-11-05 04:35:09.571764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:56.019 [2024-11-05 04:35:09.571768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:56.019 [2024-11-05 04:35:09.571772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f700) on tqpair=0x1dfd690
00:23:56.019 =====================================================
00:23:56.019 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:56.019 =====================================================
00:23:56.019 Controller Capabilities/Features
00:23:56.019 ================================
00:23:56.019 Vendor ID: 0000
00:23:56.019 Subsystem Vendor ID: 0000
00:23:56.019 Serial Number: ....................
00:23:56.019 Model Number: ........................................
00:23:56.019 Firmware Version: 25.01
00:23:56.019 Recommended Arb Burst: 0
00:23:56.019 IEEE OUI Identifier: 00 00 00
00:23:56.019 Multi-path I/O
00:23:56.019 May have multiple subsystem ports: No
00:23:56.019 May have multiple controllers: No
00:23:56.019 Associated with SR-IOV VF: No
00:23:56.019 Max Data Transfer Size: 131072
00:23:56.019 Max Number of Namespaces: 0
00:23:56.019 Max Number of I/O Queues: 1024
00:23:56.019 NVMe Specification Version (VS): 1.3
00:23:56.019 NVMe Specification Version (Identify): 1.3
00:23:56.019 Maximum Queue Entries: 128
00:23:56.019 Contiguous Queues Required: Yes
00:23:56.019 Arbitration Mechanisms Supported
00:23:56.019 Weighted Round Robin: Not Supported
00:23:56.019 Vendor Specific: Not Supported
00:23:56.019 Reset Timeout: 15000 ms
00:23:56.019 Doorbell Stride: 4 bytes
00:23:56.019 NVM Subsystem Reset: Not Supported
00:23:56.019 Command Sets Supported
00:23:56.019 NVM Command Set: Supported
00:23:56.019 Boot Partition: Not Supported
00:23:56.019 Memory Page Size Minimum: 4096 bytes
00:23:56.019 Memory Page Size Maximum: 4096 bytes
00:23:56.019 Persistent Memory Region: Not Supported
00:23:56.019 Optional Asynchronous Events Supported
00:23:56.019 Namespace Attribute Notices: Not Supported
00:23:56.019 Firmware Activation Notices: Not Supported
00:23:56.019 ANA Change Notices: Not Supported
00:23:56.019 PLE Aggregate Log Change Notices: Not Supported
00:23:56.019 LBA Status Info Alert Notices: Not Supported
00:23:56.019 EGE Aggregate Log Change Notices: Not Supported
00:23:56.019 Normal NVM Subsystem Shutdown event: Not Supported
00:23:56.019 Zone Descriptor Change Notices: Not Supported
00:23:56.019 Discovery Log Change Notices: Supported
00:23:56.019 Controller Attributes
00:23:56.019 128-bit Host Identifier: Not Supported
00:23:56.019 Non-Operational Permissive Mode: Not Supported
00:23:56.019 NVM Sets: Not Supported
00:23:56.019 Read Recovery Levels: Not Supported
00:23:56.019 Endurance Groups: Not Supported
00:23:56.019 Predictable Latency Mode: Not Supported
00:23:56.019 Traffic Based Keep ALive: Not Supported
00:23:56.019 Namespace Granularity: Not Supported
00:23:56.019 SQ Associations: Not Supported
00:23:56.019 UUID List: Not Supported
00:23:56.019 Multi-Domain Subsystem: Not Supported
00:23:56.019 Fixed Capacity Management: Not Supported
00:23:56.019 Variable Capacity Management: Not Supported
00:23:56.019 Delete Endurance Group: Not Supported
00:23:56.019 Delete NVM Set: Not Supported
00:23:56.019 Extended LBA Formats Supported: Not Supported
00:23:56.019 Flexible Data Placement Supported: Not Supported
00:23:56.019
00:23:56.019 Controller Memory Buffer Support
00:23:56.019 ================================
00:23:56.019 Supported: No
00:23:56.019
00:23:56.019 Persistent Memory Region Support
00:23:56.019 ================================
00:23:56.019 Supported: No
00:23:56.019
00:23:56.019 Admin Command Set Attributes
00:23:56.019 ============================
00:23:56.019 Security Send/Receive: Not Supported
00:23:56.019 Format NVM: Not Supported
00:23:56.019 Firmware Activate/Download: Not Supported
00:23:56.019 Namespace Management: Not Supported
00:23:56.019 Device Self-Test: Not Supported
00:23:56.019 Directives: Not Supported
00:23:56.019 NVMe-MI: Not Supported
00:23:56.019 Virtualization Management: Not Supported
00:23:56.019 Doorbell Buffer Config: Not Supported
00:23:56.019 Get LBA Status Capability: Not Supported
00:23:56.019 Command & Feature Lockdown Capability: Not Supported
00:23:56.019 Abort Command Limit: 1
00:23:56.019 Async Event Request Limit: 4
00:23:56.019 Number of Firmware Slots: N/A
00:23:56.019 Firmware Slot 1 Read-Only: N/A
00:23:56.019 Firmware Activation Without Reset: N/A
00:23:56.019 Multiple Update Detection Support: N/A
00:23:56.019 Firmware Update Granularity: No Information Provided
00:23:56.019 Per-Namespace SMART Log: No
00:23:56.019 Asymmetric Namespace Access Log Page: Not Supported
00:23:56.019 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:56.019 Command Effects Log Page: Not Supported
00:23:56.019 Get Log Page Extended Data: Supported
00:23:56.019 Telemetry Log Pages: Not Supported
00:23:56.019 Persistent Event Log Pages: Not Supported
00:23:56.019 Supported Log Pages Log Page: May Support
00:23:56.019 Commands Supported & Effects Log Page: Not Supported
00:23:56.019 Feature Identifiers & Effects Log Page:May Support
00:23:56.019 NVMe-MI Commands & Effects Log Page: May Support
00:23:56.019 Data Area 4 for Telemetry Log: Not Supported
00:23:56.019 Error Log Page Entries Supported: 128
00:23:56.019 Keep Alive: Not Supported
00:23:56.019
00:23:56.019 NVM Command Set Attributes
00:23:56.019 ==========================
00:23:56.019 Submission Queue Entry Size
00:23:56.019 Max: 1
00:23:56.019 Min: 1
00:23:56.020 Completion Queue Entry Size
00:23:56.020 Max: 1
00:23:56.020 Min: 1
00:23:56.020 Number of Namespaces: 0
00:23:56.020 Compare Command: Not Supported
00:23:56.020 Write Uncorrectable Command: Not Supported
00:23:56.020 Dataset Management Command: Not Supported
00:23:56.020 Write Zeroes Command: Not Supported
00:23:56.020 Set Features Save Field: Not Supported
00:23:56.020 Reservations: Not Supported
00:23:56.020 Timestamp: Not Supported
00:23:56.020 Copy: Not Supported
00:23:56.020 Volatile Write Cache: Not Present
00:23:56.020 Atomic Write Unit (Normal): 1
00:23:56.020 Atomic Write Unit (PFail): 1
00:23:56.020 Atomic Compare & Write Unit: 1
00:23:56.020 Fused Compare & Write: Supported
00:23:56.020 Scatter-Gather List
00:23:56.020 SGL Command Set: Supported
00:23:56.020 SGL Keyed: Supported
00:23:56.020 SGL Bit Bucket Descriptor: Not Supported
00:23:56.020 SGL Metadata Pointer: Not Supported
00:23:56.020 Oversized SGL: Not Supported
00:23:56.020 SGL Metadata Address: Not Supported
00:23:56.020 SGL Offset: Supported
00:23:56.020 Transport SGL Data Block: Not Supported
00:23:56.020 Replay Protected Memory Block: Not Supported
00:23:56.020
00:23:56.020 Firmware Slot Information
00:23:56.020 =========================
00:23:56.020 Active slot: 0
00:23:56.020
00:23:56.020
00:23:56.020 Error Log
00:23:56.020 =========
00:23:56.020
00:23:56.020 Active Namespaces
00:23:56.020 =================
00:23:56.020 Discovery Log Page
00:23:56.020 ==================
00:23:56.020 Generation Counter: 2
00:23:56.020 Number of Records: 2
00:23:56.020 Record Format: 0
00:23:56.020
00:23:56.020 Discovery Log Entry 0
00:23:56.020 ----------------------
00:23:56.020 Transport Type: 3 (TCP)
00:23:56.020 Address Family: 1 (IPv4)
00:23:56.020 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:56.020 Entry Flags:
00:23:56.020 Duplicate Returned Information: 1
00:23:56.020 Explicit Persistent Connection Support for Discovery: 1
00:23:56.020 Transport Requirements:
00:23:56.020 Secure Channel: Not Required
00:23:56.020 Port ID: 0 (0x0000)
00:23:56.020 Controller ID: 65535 (0xffff)
00:23:56.020 Admin Max SQ Size: 128
00:23:56.020 Transport Service Identifier: 4420
00:23:56.020 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:56.020 Transport Address: 10.0.0.2
00:23:56.020 Discovery Log Entry 1
00:23:56.020 ----------------------
00:23:56.020 Transport Type: 3 (TCP)
00:23:56.020 Address Family: 1 (IPv4)
00:23:56.020 Subsystem Type: 2 (NVM Subsystem)
00:23:56.020 Entry Flags:
00:23:56.020 Duplicate Returned Information: 0
00:23:56.020 Explicit Persistent Connection Support for Discovery: 0
00:23:56.020 Transport Requirements:
00:23:56.020 Secure Channel: Not Required
00:23:56.020 Port ID: 0 (0x0000)
00:23:56.020 Controller ID: 65535 (0xffff)
00:23:56.020 Admin Max SQ Size: 128
00:23:56.020 Transport Service Identifier: 4420
00:23:56.020 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:56.020 Transport Address: 10.0.0.2 [2024-11-05 04:35:09.571860] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:23:56.020 [2024-11-05 04:35:09.571871] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f100) on tqpair=0x1dfd690
00:23:56.020 [2024-11-05 04:35:09.571879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.020 [2024-11-05 04:35:09.571885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f280) on tqpair=0x1dfd690
00:23:56.020 [2024-11-05 04:35:09.571890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.020 [2024-11-05 04:35:09.571895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f400) on tqpair=0x1dfd690
00:23:56.020 [2024-11-05 04:35:09.571899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.020 [2024-11-05 04:35:09.571904] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690
00:23:56.020 [2024-11-05 04:35:09.571909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:56.020 [2024-11-05 04:35:09.571918] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:56.020 [2024-11-05 04:35:09.571922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:56.020 [2024-11-05 04:35:09.571925] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690)
00:23:56.020 [2024-11-05 04:35:09.571933] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.020 [2024-11-05 04:35:09.571946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0
00:23:56.020 [2024-11-05 04:35:09.572135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:56.020 [2024-11-05 04:35:09.572142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:56.020 [2024-11-05 04:35:09.572145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:56.020 [2024-11-05 04:35:09.572149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690
00:23:56.020 [2024-11-05 04:35:09.572159] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:56.020 [2024-11-05 04:35:09.572162] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:56.020 [2024-11-05 04:35:09.572166] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690)
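Discovery Log Entry 1 above is the actionable record: it advertises an NVM subsystem (nqn.2016-06.io.spdk:cnode1) behind the same TCP portal, and the test's next step is to point spdk_nvme_identify at exactly that entry. As a minimal host-side sketch of the same flow against SPDK's public NVMe API (the transport string is lifted from the log; the program name, option values, and printed fields are illustrative assumptions, not part of the test):

/* sketch.c - hedged illustration, not the test's own code: parse the
 * transport ID advertised by the discovery entry and connect to it. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* illustrative name */
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same string the test later passes to spdk_nvme_identify -r. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Synchronous connect; this drives the FABRIC CONNECT / property
	 * get-set / IDENTIFY exchange traced in the debug records here. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	/* sn/mn/fr are fixed-width, space-padded, not NUL-terminated. */
	printf("SN: %.20s MN: %.40s FR: %.8s\n", cdata->sn, cdata->mn, cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}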
00:23:56.020 [2024-11-05 04:35:09.572173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.020 [2024-11-05 04:35:09.572186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0
00:23:56.020 [2024-11-05 04:35:09.572393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:56.020 [2024-11-05 04:35:09.572400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:56.020 [2024-11-05 04:35:09.572403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:56.020 [2024-11-05 04:35:09.572407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690
00:23:56.020 [2024-11-05 04:35:09.572412] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us
00:23:56.020 [2024-11-05 04:35:09.572417] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms
00:23:56.020 [2024-11-05 04:35:09.572426] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:56.020 [2024-11-05 04:35:09.572430] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:56.020 [2024-11-05 04:35:09.572433] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690)
00:23:56.020 [2024-11-05 04:35:09.572440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.020 [2024-11-05 04:35:09.572450] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0
00:23:56.020 [2024-11-05 04:35:09.572641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:56.020 [2024-11-05 04:35:09.572648] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:56.020 [2024-11-05 04:35:09.572653] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:56.020 [2024-11-05 04:35:09.572657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690
00:23:56.020 [2024-11-05 04:35:09.572667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:56.020 [2024-11-05 04:35:09.572671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:56.020 [2024-11-05 04:35:09.572674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690)
00:23:56.020 [2024-11-05 04:35:09.572681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.020 [2024-11-05 04:35:09.572691] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0
00:23:56.020 [2024-11-05 04:35:09.572863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:56.020 [2024-11-05 04:35:09.572870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:56.020 [2024-11-05 04:35:09.572873] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:56.020 [2024-11-05 04:35:09.572877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690
00:23:56.020 [2024-11-05 04:35:09.572887] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:56.020 [2024-11-05 04:35:09.572890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:56.020 [2024-11-05 04:35:09.572894]
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690) 00:23:56.020 [2024-11-05 04:35:09.572901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.021 [2024-11-05 04:35:09.572911] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0 00:23:56.021 [2024-11-05 04:35:09.573085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.021 [2024-11-05 04:35:09.573091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.021 [2024-11-05 04:35:09.573095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.573099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690 00:23:56.021 [2024-11-05 04:35:09.573108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.573112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.573116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690) 00:23:56.021 [2024-11-05 04:35:09.573122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.021 [2024-11-05 04:35:09.573133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0 00:23:56.021 [2024-11-05 04:35:09.573314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.021 [2024-11-05 04:35:09.573321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.021 [2024-11-05 04:35:09.573324] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.573328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690 00:23:56.021 [2024-11-05 04:35:09.573338] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.573341] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.573345] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690) 00:23:56.021 [2024-11-05 04:35:09.573352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.021 [2024-11-05 04:35:09.573362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0 00:23:56.021 [2024-11-05 04:35:09.573534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.021 [2024-11-05 04:35:09.573541] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.021 [2024-11-05 04:35:09.573544] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.573550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690 00:23:56.021 [2024-11-05 04:35:09.573560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.573564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.573567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690) 00:23:56.021 [2024-11-05 04:35:09.573574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.021 [2024-11-05 04:35:09.573584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0 00:23:56.021 [2024-11-05 04:35:09.573781] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.021 [2024-11-05 04:35:09.573788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.021 [2024-11-05 04:35:09.573792] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.573796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690 00:23:56.021 [2024-11-05 04:35:09.573805] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.573809] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.573813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690) 00:23:56.021 [2024-11-05 04:35:09.573819] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.021 [2024-11-05 04:35:09.573830] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0 00:23:56.021 [2024-11-05 04:35:09.574000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.021 [2024-11-05 04:35:09.574006] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.021 [2024-11-05 04:35:09.574009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.574013] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690 00:23:56.021 [2024-11-05 04:35:09.574023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.574027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.574030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690) 00:23:56.021 [2024-11-05 04:35:09.574037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.021 [2024-11-05 04:35:09.574047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0 00:23:56.021 [2024-11-05 04:35:09.574214] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.021 [2024-11-05 04:35:09.574220] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.021 [2024-11-05 04:35:09.574223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.574227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690 00:23:56.021 [2024-11-05 04:35:09.574237] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.574241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.574244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690) 00:23:56.021 [2024-11-05 04:35:09.574251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.021 [2024-11-05 04:35:09.574261] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0 00:23:56.021 
[2024-11-05 04:35:09.574437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.021 [2024-11-05 04:35:09.574443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.021 [2024-11-05 04:35:09.574447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.574451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690 00:23:56.021 [2024-11-05 04:35:09.574462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.574466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.574470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690) 00:23:56.021 [2024-11-05 04:35:09.574477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.021 [2024-11-05 04:35:09.574487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0 00:23:56.021 [2024-11-05 04:35:09.574653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.021 [2024-11-05 04:35:09.574659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.021 [2024-11-05 04:35:09.574663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.574667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690 00:23:56.021 [2024-11-05 04:35:09.574676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.574680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.574684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690) 00:23:56.021 [2024-11-05 04:35:09.574690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.021 [2024-11-05 04:35:09.574700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0 00:23:56.021 [2024-11-05 04:35:09.574900] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.021 [2024-11-05 04:35:09.574907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.021 [2024-11-05 04:35:09.574910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.574914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690 00:23:56.021 [2024-11-05 04:35:09.574924] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.574928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.021 [2024-11-05 04:35:09.574931] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690) 00:23:56.021 [2024-11-05 04:35:09.574938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.021 [2024-11-05 04:35:09.574949] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0 00:23:56.022 [2024-11-05 04:35:09.575146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.022 [2024-11-05 04:35:09.575152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:56.022 [2024-11-05 04:35:09.575155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.022 [2024-11-05 04:35:09.575159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690 00:23:56.022 [2024-11-05 04:35:09.575169] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.022 [2024-11-05 04:35:09.575173] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.022 [2024-11-05 04:35:09.575176] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690) 00:23:56.022 [2024-11-05 04:35:09.575183] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.022 [2024-11-05 04:35:09.575193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0 00:23:56.022 [2024-11-05 04:35:09.575365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.022 [2024-11-05 04:35:09.575372] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.022 [2024-11-05 04:35:09.575375] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.022 [2024-11-05 04:35:09.575379] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690 00:23:56.022 [2024-11-05 04:35:09.575389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.022 [2024-11-05 04:35:09.575394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.022 [2024-11-05 04:35:09.575398] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690) 00:23:56.022 [2024-11-05 04:35:09.575405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.022 [2024-11-05 04:35:09.575415] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0 00:23:56.022 [2024-11-05 04:35:09.575657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.022 [2024-11-05 04:35:09.575664] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.022 [2024-11-05 04:35:09.575668] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.022 [2024-11-05 04:35:09.575672] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1e5f580) on tqpair=0x1dfd690 00:23:56.022 [2024-11-05 04:35:09.575681] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.022 [2024-11-05 04:35:09.575685] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.022 [2024-11-05 04:35:09.575689] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1dfd690) 00:23:56.022 [2024-11-05 04:35:09.575696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.022 [2024-11-05 04:35:09.575705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1e5f580, cid 3, qid 0 00:23:56.022 [2024-11-05 04:35:09.579755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.022 [2024-11-05 04:35:09.579764] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.022 [2024-11-05 04:35:09.579768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.022 [2024-11-05 04:35:09.579771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1e5f580) on tqpair=0x1dfd690 [2024-11-05 04:35:09.579779] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
00:23:56.022
00:23:56.022 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:23:56.022 [2024-11-05 04:35:09.623970] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization...
00:23:56.022 [2024-11-05 04:35:09.624015] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3083054 ]
00:23:56.287 [2024-11-05 04:35:09.677813] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:23:56.287 [2024-11-05 04:35:09.677866] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:23:56.287 [2024-11-05 04:35:09.677871] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:23:56.287 [2024-11-05 04:35:09.677883] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:23:56.287 [2024-11-05 04:35:09.677891] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:23:56.287 [2024-11-05 04:35:09.681957] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:23:56.287 [2024-11-05 04:35:09.681985] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x216e690 0
00:23:56.287 [2024-11-05 04:35:09.689755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:23:56.287 [2024-11-05 04:35:09.689767] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:23:56.287 [2024-11-05 04:35:09.689771] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:23:56.287 [2024-11-05 04:35:09.689778] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:23:56.287 [2024-11-05 04:35:09.689807] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:56.287 [2024-11-05 04:35:09.689812] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:56.287 [2024-11-05 04:35:09.689816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216e690)
00:23:56.287 [2024-11-05 04:35:09.689829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:23:56.287 [2024-11-05 04:35:09.689846] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0100, cid 0, qid 0
00:23:56.287 [2024-11-05 04:35:09.696754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:56.287 [2024-11-05 04:35:09.696763] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:56.287 [2024-11-05 04:35:09.696767] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
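The bring-up that follows repeats, for nqn.2016-06.io.spdk:cnode1, the admin-queue sequence already traced for the discovery controller: FABRIC CONNECT, VS/CAP/CC property reads, CC.EN handling, IDENTIFY, AER configuration, and keep-alive setup. The earlier "Sending keep alive every 5000000 us" lines match SPDK sending keep-alives at half the configured keep-alive timeout, whose default is 10000 ms. A hedged sketch of overriding that timeout at connect time (option field and helper per current SPDK headers; the 30 s value is an arbitrary example, not this test's configuration):

#include "spdk/nvme.h"

/* Sketch: connect with a non-default keep-alive timeout. With 30000 ms
 * the host would send keep-alives roughly every 15 s instead of the
 * 5 s cadence visible in the log for the 10000 ms default. */
static struct spdk_nvme_ctrlr *
connect_with_keep_alive(const struct spdk_nvme_transport_id *trid)
{
	struct spdk_nvme_ctrlr_opts opts;

	spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
	opts.keep_alive_timeout_ms = 30000;

	return spdk_nvme_connect(trid, &opts, sizeof(opts));
}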
00:23:56.287 [2024-11-05 04:35:09.696771] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0100) on tqpair=0x216e690 [2024-11-05 04:35:09.696783] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:23:56.287 [2024-11-05 04:35:09.696790] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:23:56.287 [2024-11-05 04:35:09.696796] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:23:56.287 [2024-11-05 04:35:09.696808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:56.287 [2024-11-05 04:35:09.696813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:56.287 [2024-11-05 04:35:09.696816] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216e690)
00:23:56.287 [2024-11-05 04:35:09.696824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.287 [2024-11-05 04:35:09.696837] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0100, cid 0, qid 0
00:23:56.287 [2024-11-05 04:35:09.697030] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:56.287 [2024-11-05 04:35:09.697037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:56.287 [2024-11-05 04:35:09.697040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:56.287 [2024-11-05 04:35:09.697044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0100) on tqpair=0x216e690
00:23:56.287 [2024-11-05 04:35:09.697049] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:23:56.287 [2024-11-05 04:35:09.697056] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:23:56.287 [2024-11-05 04:35:09.697063] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:56.287 [2024-11-05 04:35:09.697067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:56.287 [2024-11-05 04:35:09.697071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216e690)
00:23:56.287 [2024-11-05 04:35:09.697078] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:56.287 [2024-11-05 04:35:09.697088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0100, cid 0, qid 0
00:23:56.287 [2024-11-05 04:35:09.697299] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:56.287 [2024-11-05 04:35:09.697305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:56.287 [2024-11-05 04:35:09.697308] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:56.287 [2024-11-05 04:35:09.697312] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0100) on tqpair=0x216e690
00:23:56.287 [2024-11-05 04:35:09.697317] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:23:56.287 [2024-11-05 04:35:09.697325] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:23:56.287 [2024-11-05 04:35:09.697336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:56.287 [2024-11-05 04:35:09.697340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:56.287 [2024-11-05 04:35:09.697343] nvme_tcp.c:
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216e690) 00:23:56.287 [2024-11-05 04:35:09.697350] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.287 [2024-11-05 04:35:09.697361] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0100, cid 0, qid 0 00:23:56.287 [2024-11-05 04:35:09.697554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.287 [2024-11-05 04:35:09.697560] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.287 [2024-11-05 04:35:09.697563] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.287 [2024-11-05 04:35:09.697567] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0100) on tqpair=0x216e690 00:23:56.287 [2024-11-05 04:35:09.697572] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:56.287 [2024-11-05 04:35:09.697585] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.287 [2024-11-05 04:35:09.697589] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.287 [2024-11-05 04:35:09.697592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216e690) 00:23:56.287 [2024-11-05 04:35:09.697599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.287 [2024-11-05 04:35:09.697609] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0100, cid 0, qid 0 00:23:56.287 [2024-11-05 04:35:09.697792] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.287 [2024-11-05 04:35:09.697799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.287 [2024-11-05 04:35:09.697803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.287 [2024-11-05 04:35:09.697807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0100) on tqpair=0x216e690 00:23:56.287 [2024-11-05 04:35:09.697811] nvme_ctrlr.c:3870:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:56.287 [2024-11-05 04:35:09.697816] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:56.287 [2024-11-05 04:35:09.697824] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:56.287 [2024-11-05 04:35:09.697929] nvme_ctrlr.c:4068:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:56.287 [2024-11-05 04:35:09.697934] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:56.287 [2024-11-05 04:35:09.697942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.287 [2024-11-05 04:35:09.697946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.287 [2024-11-05 04:35:09.697949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216e690) 00:23:56.288 [2024-11-05 04:35:09.697956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:23:56.288 [2024-11-05 04:35:09.697967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0100, cid 0, qid 0 00:23:56.288 [2024-11-05 04:35:09.698155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.288 [2024-11-05 04:35:09.698162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.288 [2024-11-05 04:35:09.698165] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.698169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0100) on tqpair=0x216e690 00:23:56.288 [2024-11-05 04:35:09.698176] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:56.288 [2024-11-05 04:35:09.698185] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.698189] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.698193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216e690) 00:23:56.288 [2024-11-05 04:35:09.698200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.288 [2024-11-05 04:35:09.698210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0100, cid 0, qid 0 00:23:56.288 [2024-11-05 04:35:09.698424] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.288 [2024-11-05 04:35:09.698431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.288 [2024-11-05 04:35:09.698434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.698438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0100) on tqpair=0x216e690 00:23:56.288 [2024-11-05 04:35:09.698443] nvme_ctrlr.c:3905:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:56.288 [2024-11-05 04:35:09.698447] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:56.288 [2024-11-05 04:35:09.698455] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:56.288 [2024-11-05 04:35:09.698462] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:56.288 [2024-11-05 04:35:09.698471] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.698474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216e690) 00:23:56.288 [2024-11-05 04:35:09.698481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.288 [2024-11-05 04:35:09.698492] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0100, cid 0, qid 0 00:23:56.288 [2024-11-05 04:35:09.698713] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:56.288 [2024-11-05 04:35:09.698720] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:56.288 [2024-11-05 04:35:09.698723] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 
00:23:56.288 [2024-11-05 04:35:09.698728] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x216e690): datao=0, datal=4096, cccid=0 00:23:56.288 [2024-11-05 04:35:09.698733] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d0100) on tqpair(0x216e690): expected_datao=0, payload_size=4096 00:23:56.288 [2024-11-05 04:35:09.698737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.698744] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.698753] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.698914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.288 [2024-11-05 04:35:09.698920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.288 [2024-11-05 04:35:09.698923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.698927] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0100) on tqpair=0x216e690 00:23:56.288 [2024-11-05 04:35:09.698934] nvme_ctrlr.c:2054:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:56.288 [2024-11-05 04:35:09.698939] nvme_ctrlr.c:2058:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:56.288 [2024-11-05 04:35:09.698943] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:56.288 [2024-11-05 04:35:09.698950] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:56.288 [2024-11-05 04:35:09.698954] nvme_ctrlr.c:2100:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:56.288 [2024-11-05 04:35:09.698959] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:56.288 [2024-11-05 04:35:09.698967] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:56.288 [2024-11-05 04:35:09.698974] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.698978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.698981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216e690) 00:23:56.288 [2024-11-05 04:35:09.698988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:56.288 [2024-11-05 04:35:09.698999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0100, cid 0, qid 0 00:23:56.288 [2024-11-05 04:35:09.699195] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.288 [2024-11-05 04:35:09.699201] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.288 [2024-11-05 04:35:09.699205] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.699208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0100) on tqpair=0x216e690 00:23:56.288 [2024-11-05 04:35:09.699220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.699224] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.699227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x216e690) 00:23:56.288 [2024-11-05 04:35:09.699234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.288 [2024-11-05 04:35:09.699240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.699244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.699247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x216e690) 00:23:56.288 [2024-11-05 04:35:09.699253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.288 [2024-11-05 04:35:09.699259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.699263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.699267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x216e690) 00:23:56.288 [2024-11-05 04:35:09.699273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.288 [2024-11-05 04:35:09.699279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.699282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.699286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216e690) 00:23:56.288 [2024-11-05 04:35:09.699292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.288 [2024-11-05 04:35:09.699297] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:56.288 [2024-11-05 04:35:09.699305] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:56.288 [2024-11-05 04:35:09.699311] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.699314] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x216e690) 00:23:56.288 [2024-11-05 04:35:09.699321] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.288 [2024-11-05 04:35:09.699334] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0100, cid 0, qid 0 00:23:56.288 [2024-11-05 04:35:09.699340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0280, cid 1, qid 0 00:23:56.288 [2024-11-05 04:35:09.699345] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0400, cid 2, qid 0 00:23:56.288 [2024-11-05 04:35:09.699349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0580, cid 3, qid 0 00:23:56.288 [2024-11-05 04:35:09.699354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 4, qid 0 00:23:56.288 [2024-11-05 04:35:09.699548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.288 [2024-11-05 
04:35:09.699554] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.288 [2024-11-05 04:35:09.699558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.699562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x216e690 00:23:56.288 [2024-11-05 04:35:09.699575] nvme_ctrlr.c:3023:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:56.288 [2024-11-05 04:35:09.699580] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:56.288 [2024-11-05 04:35:09.699588] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:56.288 [2024-11-05 04:35:09.699595] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:56.288 [2024-11-05 04:35:09.699601] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.288 [2024-11-05 04:35:09.699605] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.699608] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x216e690) 00:23:56.289 [2024-11-05 04:35:09.699615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:56.289 [2024-11-05 04:35:09.699625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 4, qid 0 00:23:56.289 [2024-11-05 04:35:09.699780] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.289 [2024-11-05 04:35:09.699787] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.289 [2024-11-05 04:35:09.699790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.699794] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x216e690 00:23:56.289 [2024-11-05 04:35:09.699858] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:56.289 [2024-11-05 04:35:09.699868] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:56.289 [2024-11-05 04:35:09.699875] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.699879] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x216e690) 00:23:56.289 [2024-11-05 04:35:09.699886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.289 [2024-11-05 04:35:09.699896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 4, qid 0 00:23:56.289 [2024-11-05 04:35:09.700055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:56.289 [2024-11-05 04:35:09.700062] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:56.289 [2024-11-05 04:35:09.700065] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.700069] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x216e690): datao=0, datal=4096, cccid=4 00:23:56.289 [2024-11-05 04:35:09.700075] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d0700) on tqpair(0x216e690): expected_datao=0, payload_size=4096 00:23:56.289 [2024-11-05 04:35:09.700080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.700087] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.700090] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.700213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.289 [2024-11-05 04:35:09.700219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.289 [2024-11-05 04:35:09.700223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.700227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x216e690 00:23:56.289 [2024-11-05 04:35:09.700235] nvme_ctrlr.c:4699:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:56.289 [2024-11-05 04:35:09.700248] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:56.289 [2024-11-05 04:35:09.700257] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:56.289 [2024-11-05 04:35:09.700264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.700268] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x216e690) 00:23:56.289 [2024-11-05 04:35:09.700275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.289 [2024-11-05 04:35:09.700285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 4, qid 0 00:23:56.289 [2024-11-05 04:35:09.700495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:56.289 [2024-11-05 04:35:09.700501] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:56.289 [2024-11-05 04:35:09.700505] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.700508] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x216e690): datao=0, datal=4096, cccid=4 00:23:56.289 [2024-11-05 04:35:09.700513] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d0700) on tqpair(0x216e690): expected_datao=0, payload_size=4096 00:23:56.289 [2024-11-05 04:35:09.700517] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.700533] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.700537] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.700687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.289 [2024-11-05 04:35:09.700694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.289 [2024-11-05 04:35:09.700697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.700701] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x21d0700) on tqpair=0x216e690 00:23:56.289 [2024-11-05 04:35:09.700713] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:56.289 [2024-11-05 04:35:09.700722] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:56.289 [2024-11-05 04:35:09.700729] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.700733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x216e690) 00:23:56.289 [2024-11-05 04:35:09.700739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.289 [2024-11-05 04:35:09.704756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 4, qid 0 00:23:56.289 [2024-11-05 04:35:09.704947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:56.289 [2024-11-05 04:35:09.704954] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:56.289 [2024-11-05 04:35:09.704958] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.704961] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x216e690): datao=0, datal=4096, cccid=4 00:23:56.289 [2024-11-05 04:35:09.704966] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d0700) on tqpair(0x216e690): expected_datao=0, payload_size=4096 00:23:56.289 [2024-11-05 04:35:09.704970] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.704977] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.704980] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.705178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.289 [2024-11-05 04:35:09.705184] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.289 [2024-11-05 04:35:09.705187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.705191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x216e690 00:23:56.289 [2024-11-05 04:35:09.705199] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:56.289 [2024-11-05 04:35:09.705207] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:56.289 [2024-11-05 04:35:09.705216] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:56.289 [2024-11-05 04:35:09.705222] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:56.289 [2024-11-05 04:35:09.705227] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:56.289 [2024-11-05 04:35:09.705233] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 
00:23:56.289 [2024-11-05 04:35:09.705238] nvme_ctrlr.c:3111:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:56.289 [2024-11-05 04:35:09.705242] nvme_ctrlr.c:1534:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:56.289 [2024-11-05 04:35:09.705248] nvme_ctrlr.c:1540:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:56.289 [2024-11-05 04:35:09.705261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.705265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x216e690) 00:23:56.289 [2024-11-05 04:35:09.705272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.289 [2024-11-05 04:35:09.705279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.705283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.705286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x216e690) 00:23:56.289 [2024-11-05 04:35:09.705292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:56.289 [2024-11-05 04:35:09.705306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 4, qid 0 00:23:56.289 [2024-11-05 04:35:09.705311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0880, cid 5, qid 0 00:23:56.289 [2024-11-05 04:35:09.705522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.289 [2024-11-05 04:35:09.705528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.289 [2024-11-05 04:35:09.705531] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.705538] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x216e690 00:23:56.289 [2024-11-05 04:35:09.705544] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.289 [2024-11-05 04:35:09.705550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.289 [2024-11-05 04:35:09.705554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.705557] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0880) on tqpair=0x216e690 00:23:56.289 [2024-11-05 04:35:09.705567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.289 [2024-11-05 04:35:09.705571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x216e690) 00:23:56.290 [2024-11-05 04:35:09.705577] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.290 [2024-11-05 04:35:09.705587] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0880, cid 5, qid 0 00:23:56.290 [2024-11-05 04:35:09.705789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.290 [2024-11-05 04:35:09.705796] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.290 [2024-11-05 04:35:09.705800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.290 
[2024-11-05 04:35:09.705803] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0880) on tqpair=0x216e690 00:23:56.290 [2024-11-05 04:35:09.705813] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.705817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x216e690) 00:23:56.290 [2024-11-05 04:35:09.705823] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.290 [2024-11-05 04:35:09.705833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0880, cid 5, qid 0 00:23:56.290 [2024-11-05 04:35:09.706009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.290 [2024-11-05 04:35:09.706015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.290 [2024-11-05 04:35:09.706019] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0880) on tqpair=0x216e690 00:23:56.290 [2024-11-05 04:35:09.706031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x216e690) 00:23:56.290 [2024-11-05 04:35:09.706042] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.290 [2024-11-05 04:35:09.706051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0880, cid 5, qid 0 00:23:56.290 [2024-11-05 04:35:09.706244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.290 [2024-11-05 04:35:09.706251] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.290 [2024-11-05 04:35:09.706254] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0880) on tqpair=0x216e690 00:23:56.290 [2024-11-05 04:35:09.706272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706276] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x216e690) 00:23:56.290 [2024-11-05 04:35:09.706282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.290 [2024-11-05 04:35:09.706290] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706293] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x216e690) 00:23:56.290 [2024-11-05 04:35:09.706300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.290 [2024-11-05 04:35:09.706309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x216e690) 00:23:56.290 [2024-11-05 04:35:09.706319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:23:56.290 [2024-11-05 04:35:09.706328] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x216e690) 00:23:56.290 [2024-11-05 04:35:09.706338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.290 [2024-11-05 04:35:09.706349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0880, cid 5, qid 0 00:23:56.290 [2024-11-05 04:35:09.706355] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0700, cid 4, qid 0 00:23:56.290 [2024-11-05 04:35:09.706359] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0a00, cid 6, qid 0 00:23:56.290 [2024-11-05 04:35:09.706364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 7, qid 0 00:23:56.290 [2024-11-05 04:35:09.706613] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:56.290 [2024-11-05 04:35:09.706619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:56.290 [2024-11-05 04:35:09.706623] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706626] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x216e690): datao=0, datal=8192, cccid=5 00:23:56.290 [2024-11-05 04:35:09.706631] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d0880) on tqpair(0x216e690): expected_datao=0, payload_size=8192 00:23:56.290 [2024-11-05 04:35:09.706635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706736] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706741] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:56.290 [2024-11-05 04:35:09.706755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:56.290 [2024-11-05 04:35:09.706759] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706762] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x216e690): datao=0, datal=512, cccid=4 00:23:56.290 [2024-11-05 04:35:09.706767] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d0700) on tqpair(0x216e690): expected_datao=0, payload_size=512 00:23:56.290 [2024-11-05 04:35:09.706771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706778] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706781] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:56.290 [2024-11-05 04:35:09.706792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:56.290 [2024-11-05 04:35:09.706796] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706799] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x216e690): datao=0, datal=512, cccid=6 00:23:56.290 [2024-11-05 04:35:09.706804] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d0a00) on 
tqpair(0x216e690): expected_datao=0, payload_size=512 00:23:56.290 [2024-11-05 04:35:09.706808] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706814] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706818] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706823] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:56.290 [2024-11-05 04:35:09.706831] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:56.290 [2024-11-05 04:35:09.706835] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706838] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x216e690): datao=0, datal=4096, cccid=7 00:23:56.290 [2024-11-05 04:35:09.706842] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x21d0b80) on tqpair(0x216e690): expected_datao=0, payload_size=4096 00:23:56.290 [2024-11-05 04:35:09.706847] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706858] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706862] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.290 [2024-11-05 04:35:09.706879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.290 [2024-11-05 04:35:09.706882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0880) on tqpair=0x216e690 00:23:56.290 [2024-11-05 04:35:09.706901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.290 [2024-11-05 04:35:09.706907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.290 [2024-11-05 04:35:09.706910] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706914] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0700) on tqpair=0x216e690 00:23:56.290 [2024-11-05 04:35:09.706925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.290 [2024-11-05 04:35:09.706930] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.290 [2024-11-05 04:35:09.706934] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706938] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0a00) on tqpair=0x216e690 00:23:56.290 [2024-11-05 04:35:09.706945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.290 [2024-11-05 04:35:09.706950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.290 [2024-11-05 04:35:09.706954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.290 [2024-11-05 04:35:09.706958] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x216e690 00:23:56.290 ===================================================== 00:23:56.290 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:56.290 ===================================================== 00:23:56.290 Controller Capabilities/Features 00:23:56.290 ================================ 00:23:56.290 Vendor ID: 8086 00:23:56.290 Subsystem Vendor ID: 8086 
00:23:56.290 Serial Number: SPDK00000000000001 00:23:56.290 Model Number: SPDK bdev Controller 00:23:56.290 Firmware Version: 25.01 00:23:56.290 Recommended Arb Burst: 6 00:23:56.290 IEEE OUI Identifier: e4 d2 5c 00:23:56.290 Multi-path I/O 00:23:56.291 May have multiple subsystem ports: Yes 00:23:56.291 May have multiple controllers: Yes 00:23:56.291 Associated with SR-IOV VF: No 00:23:56.291 Max Data Transfer Size: 131072 00:23:56.291 Max Number of Namespaces: 32 00:23:56.291 Max Number of I/O Queues: 127 00:23:56.291 NVMe Specification Version (VS): 1.3 00:23:56.291 NVMe Specification Version (Identify): 1.3 00:23:56.291 Maximum Queue Entries: 128 00:23:56.291 Contiguous Queues Required: Yes 00:23:56.291 Arbitration Mechanisms Supported 00:23:56.291 Weighted Round Robin: Not Supported 00:23:56.291 Vendor Specific: Not Supported 00:23:56.291 Reset Timeout: 15000 ms 00:23:56.291 Doorbell Stride: 4 bytes 00:23:56.291 NVM Subsystem Reset: Not Supported 00:23:56.291 Command Sets Supported 00:23:56.291 NVM Command Set: Supported 00:23:56.291 Boot Partition: Not Supported 00:23:56.291 Memory Page Size Minimum: 4096 bytes 00:23:56.291 Memory Page Size Maximum: 4096 bytes 00:23:56.291 Persistent Memory Region: Not Supported 00:23:56.291 Optional Asynchronous Events Supported 00:23:56.291 Namespace Attribute Notices: Supported 00:23:56.291 Firmware Activation Notices: Not Supported 00:23:56.291 ANA Change Notices: Not Supported 00:23:56.291 PLE Aggregate Log Change Notices: Not Supported 00:23:56.291 LBA Status Info Alert Notices: Not Supported 00:23:56.291 EGE Aggregate Log Change Notices: Not Supported 00:23:56.291 Normal NVM Subsystem Shutdown event: Not Supported 00:23:56.291 Zone Descriptor Change Notices: Not Supported 00:23:56.291 Discovery Log Change Notices: Not Supported 00:23:56.291 Controller Attributes 00:23:56.291 128-bit Host Identifier: Supported 00:23:56.291 Non-Operational Permissive Mode: Not Supported 00:23:56.291 NVM Sets: Not Supported 00:23:56.291 Read Recovery Levels: Not Supported 00:23:56.291 Endurance Groups: Not Supported 00:23:56.291 Predictable Latency Mode: Not Supported 00:23:56.291 Traffic Based Keep ALive: Not Supported 00:23:56.291 Namespace Granularity: Not Supported 00:23:56.291 SQ Associations: Not Supported 00:23:56.291 UUID List: Not Supported 00:23:56.291 Multi-Domain Subsystem: Not Supported 00:23:56.291 Fixed Capacity Management: Not Supported 00:23:56.291 Variable Capacity Management: Not Supported 00:23:56.291 Delete Endurance Group: Not Supported 00:23:56.291 Delete NVM Set: Not Supported 00:23:56.291 Extended LBA Formats Supported: Not Supported 00:23:56.291 Flexible Data Placement Supported: Not Supported 00:23:56.291 00:23:56.291 Controller Memory Buffer Support 00:23:56.291 ================================ 00:23:56.291 Supported: No 00:23:56.291 00:23:56.291 Persistent Memory Region Support 00:23:56.291 ================================ 00:23:56.291 Supported: No 00:23:56.291 00:23:56.291 Admin Command Set Attributes 00:23:56.291 ============================ 00:23:56.291 Security Send/Receive: Not Supported 00:23:56.291 Format NVM: Not Supported 00:23:56.291 Firmware Activate/Download: Not Supported 00:23:56.291 Namespace Management: Not Supported 00:23:56.291 Device Self-Test: Not Supported 00:23:56.291 Directives: Not Supported 00:23:56.291 NVMe-MI: Not Supported 00:23:56.291 Virtualization Management: Not Supported 00:23:56.291 Doorbell Buffer Config: Not Supported 00:23:56.291 Get LBA Status Capability: Not Supported 00:23:56.291 Command & 
Feature Lockdown Capability: Not Supported 00:23:56.291 Abort Command Limit: 4 00:23:56.291 Async Event Request Limit: 4 00:23:56.291 Number of Firmware Slots: N/A 00:23:56.291 Firmware Slot 1 Read-Only: N/A 00:23:56.291 Firmware Activation Without Reset: N/A 00:23:56.291 Multiple Update Detection Support: N/A 00:23:56.291 Firmware Update Granularity: No Information Provided 00:23:56.291 Per-Namespace SMART Log: No 00:23:56.291 Asymmetric Namespace Access Log Page: Not Supported 00:23:56.291 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:56.291 Command Effects Log Page: Supported 00:23:56.291 Get Log Page Extended Data: Supported 00:23:56.291 Telemetry Log Pages: Not Supported 00:23:56.291 Persistent Event Log Pages: Not Supported 00:23:56.291 Supported Log Pages Log Page: May Support 00:23:56.291 Commands Supported & Effects Log Page: Not Supported 00:23:56.291 Feature Identifiers & Effects Log Page:May Support 00:23:56.291 NVMe-MI Commands & Effects Log Page: May Support 00:23:56.291 Data Area 4 for Telemetry Log: Not Supported 00:23:56.291 Error Log Page Entries Supported: 128 00:23:56.291 Keep Alive: Supported 00:23:56.291 Keep Alive Granularity: 10000 ms 00:23:56.291 00:23:56.291 NVM Command Set Attributes 00:23:56.291 ========================== 00:23:56.291 Submission Queue Entry Size 00:23:56.291 Max: 64 00:23:56.291 Min: 64 00:23:56.291 Completion Queue Entry Size 00:23:56.291 Max: 16 00:23:56.291 Min: 16 00:23:56.291 Number of Namespaces: 32 00:23:56.291 Compare Command: Supported 00:23:56.291 Write Uncorrectable Command: Not Supported 00:23:56.291 Dataset Management Command: Supported 00:23:56.291 Write Zeroes Command: Supported 00:23:56.291 Set Features Save Field: Not Supported 00:23:56.291 Reservations: Supported 00:23:56.291 Timestamp: Not Supported 00:23:56.291 Copy: Supported 00:23:56.291 Volatile Write Cache: Present 00:23:56.291 Atomic Write Unit (Normal): 1 00:23:56.291 Atomic Write Unit (PFail): 1 00:23:56.291 Atomic Compare & Write Unit: 1 00:23:56.291 Fused Compare & Write: Supported 00:23:56.291 Scatter-Gather List 00:23:56.291 SGL Command Set: Supported 00:23:56.291 SGL Keyed: Supported 00:23:56.291 SGL Bit Bucket Descriptor: Not Supported 00:23:56.291 SGL Metadata Pointer: Not Supported 00:23:56.291 Oversized SGL: Not Supported 00:23:56.291 SGL Metadata Address: Not Supported 00:23:56.291 SGL Offset: Supported 00:23:56.291 Transport SGL Data Block: Not Supported 00:23:56.291 Replay Protected Memory Block: Not Supported 00:23:56.291 00:23:56.291 Firmware Slot Information 00:23:56.291 ========================= 00:23:56.291 Active slot: 1 00:23:56.291 Slot 1 Firmware Revision: 25.01 00:23:56.291 00:23:56.291 00:23:56.291 Commands Supported and Effects 00:23:56.291 ============================== 00:23:56.291 Admin Commands 00:23:56.291 -------------- 00:23:56.291 Get Log Page (02h): Supported 00:23:56.291 Identify (06h): Supported 00:23:56.291 Abort (08h): Supported 00:23:56.291 Set Features (09h): Supported 00:23:56.291 Get Features (0Ah): Supported 00:23:56.291 Asynchronous Event Request (0Ch): Supported 00:23:56.291 Keep Alive (18h): Supported 00:23:56.291 I/O Commands 00:23:56.291 ------------ 00:23:56.291 Flush (00h): Supported LBA-Change 00:23:56.291 Write (01h): Supported LBA-Change 00:23:56.291 Read (02h): Supported 00:23:56.291 Compare (05h): Supported 00:23:56.291 Write Zeroes (08h): Supported LBA-Change 00:23:56.291 Dataset Management (09h): Supported LBA-Change 00:23:56.291 Copy (19h): Supported LBA-Change 00:23:56.291 00:23:56.291 Error Log 00:23:56.291 
========= 00:23:56.291 00:23:56.291 Arbitration 00:23:56.291 =========== 00:23:56.291 Arbitration Burst: 1 00:23:56.291 00:23:56.291 Power Management 00:23:56.291 ================ 00:23:56.291 Number of Power States: 1 00:23:56.291 Current Power State: Power State #0 00:23:56.291 Power State #0: 00:23:56.291 Max Power: 0.00 W 00:23:56.291 Non-Operational State: Operational 00:23:56.291 Entry Latency: Not Reported 00:23:56.291 Exit Latency: Not Reported 00:23:56.291 Relative Read Throughput: 0 00:23:56.291 Relative Read Latency: 0 00:23:56.291 Relative Write Throughput: 0 00:23:56.291 Relative Write Latency: 0 00:23:56.291 Idle Power: Not Reported 00:23:56.291 Active Power: Not Reported 00:23:56.291 Non-Operational Permissive Mode: Not Supported 00:23:56.291 00:23:56.291 Health Information 00:23:56.291 ================== 00:23:56.291 Critical Warnings: 00:23:56.291 Available Spare Space: OK 00:23:56.291 Temperature: OK 00:23:56.291 Device Reliability: OK 00:23:56.291 Read Only: No 00:23:56.291 Volatile Memory Backup: OK 00:23:56.291 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:56.291 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:56.291 Available Spare: 0% 00:23:56.291 Available Spare Threshold: 0% 00:23:56.291 Life Percentage Used:[2024-11-05 04:35:09.707053] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.291 [2024-11-05 04:35:09.707059] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x216e690) 00:23:56.291 [2024-11-05 04:35:09.707066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.291 [2024-11-05 04:35:09.707078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0b80, cid 7, qid 0 00:23:56.292 [2024-11-05 04:35:09.707240] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.292 [2024-11-05 04:35:09.707247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.292 [2024-11-05 04:35:09.707250] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.707254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0b80) on tqpair=0x216e690 00:23:56.292 [2024-11-05 04:35:09.707286] nvme_ctrlr.c:4363:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:56.292 [2024-11-05 04:35:09.707295] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0100) on tqpair=0x216e690 00:23:56.292 [2024-11-05 04:35:09.707302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.292 [2024-11-05 04:35:09.707307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0280) on tqpair=0x216e690 00:23:56.292 [2024-11-05 04:35:09.707311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.292 [2024-11-05 04:35:09.707316] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0400) on tqpair=0x216e690 00:23:56.292 [2024-11-05 04:35:09.707323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.292 [2024-11-05 04:35:09.707328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0580) on tqpair=0x216e690 00:23:56.292 [2024-11-05 04:35:09.707332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:56.292 [2024-11-05 04:35:09.707340] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.707344] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.707348] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216e690) 00:23:56.292 [2024-11-05 04:35:09.707355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.292 [2024-11-05 04:35:09.707367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0580, cid 3, qid 0 00:23:56.292 [2024-11-05 04:35:09.707537] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.292 [2024-11-05 04:35:09.707543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.292 [2024-11-05 04:35:09.707547] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.707551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0580) on tqpair=0x216e690 00:23:56.292 [2024-11-05 04:35:09.707557] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.707561] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.707565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216e690) 00:23:56.292 [2024-11-05 04:35:09.707572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.292 [2024-11-05 04:35:09.707584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0580, cid 3, qid 0 00:23:56.292 [2024-11-05 04:35:09.707807] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.292 [2024-11-05 04:35:09.707814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.292 [2024-11-05 04:35:09.707818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.707822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0580) on tqpair=0x216e690 00:23:56.292 [2024-11-05 04:35:09.707826] nvme_ctrlr.c:1124:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:56.292 [2024-11-05 04:35:09.707831] nvme_ctrlr.c:1127:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:56.292 [2024-11-05 04:35:09.707840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.707844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.707848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216e690) 00:23:56.292 [2024-11-05 04:35:09.707855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.292 [2024-11-05 04:35:09.707865] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0580, cid 3, qid 0 00:23:56.292 [2024-11-05 04:35:09.708024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.292 [2024-11-05 04:35:09.708030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.292 [2024-11-05 
04:35:09.708034] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.708038] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0580) on tqpair=0x216e690 00:23:56.292 [2024-11-05 04:35:09.708048] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.708052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.708055] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216e690) 00:23:56.292 [2024-11-05 04:35:09.708064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.292 [2024-11-05 04:35:09.708074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0580, cid 3, qid 0 00:23:56.292 [2024-11-05 04:35:09.708234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.292 [2024-11-05 04:35:09.708240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.292 [2024-11-05 04:35:09.708243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.708247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0580) on tqpair=0x216e690 00:23:56.292 [2024-11-05 04:35:09.708257] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.708261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.708264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216e690) 00:23:56.292 [2024-11-05 04:35:09.708271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.292 [2024-11-05 04:35:09.708281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0580, cid 3, qid 0 00:23:56.292 [2024-11-05 04:35:09.708459] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.292 [2024-11-05 04:35:09.708465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.292 [2024-11-05 04:35:09.708469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.708473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0580) on tqpair=0x216e690 00:23:56.292 [2024-11-05 04:35:09.708482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.708486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.708490] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216e690) 00:23:56.292 [2024-11-05 04:35:09.708497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.292 [2024-11-05 04:35:09.708507] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0580, cid 3, qid 0 00:23:56.292 [2024-11-05 04:35:09.708688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.292 [2024-11-05 04:35:09.708694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.292 [2024-11-05 04:35:09.708698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.708702] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0580) on 
tqpair=0x216e690 00:23:56.292 [2024-11-05 04:35:09.708711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.708715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.708719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x216e690) 00:23:56.292 [2024-11-05 04:35:09.708725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:56.292 [2024-11-05 04:35:09.708735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x21d0580, cid 3, qid 0 00:23:56.292 [2024-11-05 04:35:09.712754] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:56.292 [2024-11-05 04:35:09.712762] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:56.292 [2024-11-05 04:35:09.712766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:56.292 [2024-11-05 04:35:09.712770] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x21d0580) on tqpair=0x216e690 00:23:56.292 [2024-11-05 04:35:09.712778] nvme_ctrlr.c:1246:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:23:56.292 0% 00:23:56.292 Data Units Read: 0 00:23:56.292 Data Units Written: 0 00:23:56.292 Host Read Commands: 0 00:23:56.292 Host Write Commands: 0 00:23:56.292 Controller Busy Time: 0 minutes 00:23:56.292 Power Cycles: 0 00:23:56.292 Power On Hours: 0 hours 00:23:56.292 Unsafe Shutdowns: 0 00:23:56.292 Unrecoverable Media Errors: 0 00:23:56.292 Lifetime Error Log Entries: 0 00:23:56.292 Warning Temperature Time: 0 minutes 00:23:56.292 Critical Temperature Time: 0 minutes 00:23:56.292 00:23:56.292 Number of Queues 00:23:56.292 ================ 00:23:56.292 Number of I/O Submission Queues: 127 00:23:56.292 Number of I/O Completion Queues: 127 00:23:56.292 00:23:56.292 Active Namespaces 00:23:56.292 ================= 00:23:56.292 Namespace ID:1 00:23:56.292 Error Recovery Timeout: Unlimited 00:23:56.292 Command Set Identifier: NVM (00h) 00:23:56.292 Deallocate: Supported 00:23:56.292 Deallocated/Unwritten Error: Not Supported 00:23:56.292 Deallocated Read Value: Unknown 00:23:56.293 Deallocate in Write Zeroes: Not Supported 00:23:56.293 Deallocated Guard Field: 0xFFFF 00:23:56.293 Flush: Supported 00:23:56.293 Reservation: Supported 00:23:56.293 Namespace Sharing Capabilities: Multiple Controllers 00:23:56.293 Size (in LBAs): 131072 (0GiB) 00:23:56.293 Capacity (in LBAs): 131072 (0GiB) 00:23:56.293 Utilization (in LBAs): 131072 (0GiB) 00:23:56.293 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:56.293 EUI64: ABCDEF0123456789 00:23:56.293 UUID: 381e392d-8bce-4658-a6ae-63e4c82161f4 00:23:56.293 Thin Provisioning: Not Supported 00:23:56.293 Per-NS Atomic Units: Yes 00:23:56.293 Atomic Boundary Size (Normal): 0 00:23:56.293 Atomic Boundary Size (PFail): 0 00:23:56.293 Atomic Boundary Offset: 0 00:23:56.293 Maximum Single Source Range Length: 65535 00:23:56.293 Maximum Copy Length: 65535 00:23:56.293 Maximum Source Range Count: 1 00:23:56.293 NGUID/EUI64 Never Reused: No 00:23:56.293 Namespace Write Protected: No 00:23:56.293 Number of LBA Formats: 1 00:23:56.293 Current LBA Format: LBA Format #00 00:23:56.293 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:56.293 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:56.293 rmmod nvme_tcp 00:23:56.293 rmmod nvme_fabrics 00:23:56.293 rmmod nvme_keyring 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 3082700 ']' 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 3082700 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 3082700 ']' 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 3082700 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3082700 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3082700' 00:23:56.293 killing process with pid 3082700 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 3082700 00:23:56.293 04:35:09 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 3082700 00:23:56.554 04:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:56.554 04:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:56.554 04:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:56.554 04:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:56.554 04:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:56.554 04:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:56.554 04:35:10 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:56.554 04:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:56.554 04:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:56.554 04:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.554 04:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:56.554 04:35:10 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.469 04:35:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:58.469 00:23:58.469 real 0m11.357s 00:23:58.469 user 0m8.227s 00:23:58.469 sys 0m5.956s 00:23:58.469 04:35:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:58.469 04:35:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:58.469 ************************************ 00:23:58.469 END TEST nvmf_identify 00:23:58.469 ************************************ 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.731 ************************************ 00:23:58.731 START TEST nvmf_perf 00:23:58.731 ************************************ 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:58.731 * Looking for test storage... 
00:23:58.731 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:58.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.731 --rc genhtml_branch_coverage=1 00:23:58.731 --rc genhtml_function_coverage=1 00:23:58.731 --rc genhtml_legend=1 00:23:58.731 --rc geninfo_all_blocks=1 00:23:58.731 --rc geninfo_unexecuted_blocks=1 00:23:58.731 00:23:58.731 ' 00:23:58.731 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:58.731 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.731 --rc genhtml_branch_coverage=1 00:23:58.731 --rc genhtml_function_coverage=1 00:23:58.731 --rc genhtml_legend=1 00:23:58.731 --rc geninfo_all_blocks=1 00:23:58.731 --rc geninfo_unexecuted_blocks=1 00:23:58.732 00:23:58.732 ' 00:23:58.732 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:58.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.732 --rc genhtml_branch_coverage=1 00:23:58.732 --rc genhtml_function_coverage=1 00:23:58.732 --rc genhtml_legend=1 00:23:58.732 --rc geninfo_all_blocks=1 00:23:58.732 --rc geninfo_unexecuted_blocks=1 00:23:58.732 00:23:58.732 ' 00:23:58.732 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:58.732 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.732 --rc genhtml_branch_coverage=1 00:23:58.732 --rc genhtml_function_coverage=1 00:23:58.732 --rc genhtml_legend=1 00:23:58.732 --rc geninfo_all_blocks=1 00:23:58.732 --rc geninfo_unexecuted_blocks=1 00:23:58.732 00:23:58.732 ' 00:23:58.732 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.993 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:58.994 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.994 04:35:12 
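The "[: : integer expression expected" message above is bash, not the test, complaining: nvmf/common.sh line 33 runs a numeric test against a variable that expanded to the empty string ('[' '' -eq 1 ']'). Which variable is being tested is not visible in this trace; a stand-in reproduction and the usual guard:

  flag=''                   # stands in for whatever common.sh line 33 expands
  [ "$flag" -eq 1 ]         # bash: [: : integer expression expected
  [ "${flag:-0}" -eq 1 ]    # guarded form: empty/unset defaults to 0

The script continues regardless, because the failed test simply takes the false branch.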
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:58.994 04:35:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:05.587 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:05.587 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:05.587 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:05.587 04:35:19 
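gather_supported_nvmf_pci_devs, traced above, walks the PCI bus against the e810/x722/mlx ID tables and resolves each matching function to its kernel netdev through sysfs. A condensed sketch of the same scan for the E810 ID found on this host (0x8086:0x159b):

  for pci in /sys/bus/pci/devices/*; do
      [[ $(< "$pci/vendor") == 0x8086 && $(< "$pci/device") == 0x159b ]] || continue
      for net in "$pci"/net/*; do          # same glob as nvmf/common.sh@411 below
          echo "Found net device under ${pci##*/}: ${net##*/}"
      done
  done

On this machine it yields the two ice-driver ports, cvl_0_0 and cvl_0_1.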
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:05.587 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:05.587 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:05.849 04:35:19 
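nvmf_tcp_init, traced above and finishing on the next lines, splits the two ports across network namespaces so a single host can act as both target (10.0.0.2 inside cvl_0_0_ns_spdk) and initiator (10.0.0.1 in the default namespace). Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # initiator -> target check

The nvmf_tgt application is later launched via ip netns exec cvl_0_0_ns_spdk, so its TCP listener binds the target-side address.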
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:05.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:05.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.423 ms 00:24:05.849 00:24:05.849 --- 10.0.0.2 ping statistics --- 00:24:05.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.849 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:05.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:24:05.849 00:24:05.849 --- 10.0.0.1 ping statistics --- 00:24:05.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.849 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:05.849 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:06.110 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:06.110 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:06.110 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:06.110 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:06.110 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=3087061 00:24:06.110 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 3087061 00:24:06.110 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:06.110 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 3087061 ']' 00:24:06.110 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.110 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:06.110 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:06.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.110 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:06.110 04:35:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:06.110 [2024-11-05 04:35:19.585353] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:24:06.110 [2024-11-05 04:35:19.585424] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.110 [2024-11-05 04:35:19.669338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:06.110 [2024-11-05 04:35:19.711831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.110 [2024-11-05 04:35:19.711868] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.110 [2024-11-05 04:35:19.711876] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:06.110 [2024-11-05 04:35:19.711883] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:06.110 [2024-11-05 04:35:19.711889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:06.110 [2024-11-05 04:35:19.713779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.110 [2024-11-05 04:35:19.713944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:06.110 [2024-11-05 04:35:19.714077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.110 [2024-11-05 04:35:19.714077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:07.055 04:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:07.055 04:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:24:07.055 04:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:07.055 04:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:07.055 04:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:07.055 04:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.055 04:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:07.055 04:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:07.316 04:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:07.316 04:35:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:07.576 04:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:24:07.576 04:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:07.838 04:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
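With the target app up inside the namespace and a malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 above) created alongside the local NVMe bdev, the next lines assemble the subsystem over JSON-RPC. Condensed, with the long rpc.py path shortened:

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # NSID 1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1     # NSID 2
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

A local PCIe baseline (spdk_nvme_perf -r 'trtype:PCIe traddr:0000:65:00.0') runs first; the same binary is then pointed at the TCP listener for the fabric runs below.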
00:24:07.838 04:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:24:07.838 04:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:07.838 04:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:07.838 04:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:08.099 [2024-11-05 04:35:21.489679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.099 04:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:08.100 04:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:08.100 04:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:08.360 04:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:08.360 04:35:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:24:08.622 04:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.622 [2024-11-05 04:35:22.232418] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.882 04:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:08.882 04:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:24:08.882 04:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:08.882 04:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:08.882 04:35:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:24:10.264 Initializing NVMe Controllers 00:24:10.264 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:24:10.264 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:24:10.264 Initialization complete. Launching workers. 
00:24:10.264 ======================================================== 00:24:10.264 Latency(us) 00:24:10.264 Device Information : IOPS MiB/s Average min max 00:24:10.264 PCIE (0000:65:00.0) NSID 1 from core 0: 78861.34 308.05 405.20 13.28 5270.50 00:24:10.264 ======================================================== 00:24:10.264 Total : 78861.34 308.05 405.20 13.28 5270.50 00:24:10.264 00:24:10.265 04:35:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:11.648 Initializing NVMe Controllers 00:24:11.648 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:11.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:11.648 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:11.648 Initialization complete. Launching workers. 00:24:11.648 ======================================================== 00:24:11.648 Latency(us) 00:24:11.648 Device Information : IOPS MiB/s Average min max 00:24:11.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.00 0.31 12710.72 287.86 45577.99 00:24:11.648 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 18669.73 7965.68 47903.97 00:24:11.648 ======================================================== 00:24:11.648 Total : 136.00 0.53 15164.43 287.86 47903.97 00:24:11.648 00:24:11.648 04:35:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:13.030 Initializing NVMe Controllers 00:24:13.030 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:13.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:13.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:13.030 Initialization complete. Launching workers. 00:24:13.030 ======================================================== 00:24:13.030 Latency(us) 00:24:13.030 Device Information : IOPS MiB/s Average min max 00:24:13.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10537.97 41.16 3075.57 489.66 45580.91 00:24:13.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3827.62 14.95 8409.27 6904.24 16076.47 00:24:13.030 ======================================================== 00:24:13.030 Total : 14365.59 56.12 4496.70 489.66 45580.91 00:24:13.030 00:24:13.030 04:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:13.030 04:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:13.030 04:35:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:15.572 Initializing NVMe Controllers 00:24:15.573 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:15.573 Controller IO queue size 128, less than required. 00:24:15.573 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:15.573 Controller IO queue size 128, less than required. 00:24:15.573 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:15.573 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:15.573 Initialization complete. Launching workers. 00:24:15.573 ======================================================== 00:24:15.573 Latency(us) 00:24:15.573 Device Information : IOPS MiB/s Average min max 00:24:15.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1731.50 432.87 74759.78 48263.61 120761.05 00:24:15.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 580.99 145.25 227287.13 70963.31 357516.06 00:24:15.573 ======================================================== 00:24:15.573 Total : 2312.49 578.12 113080.92 48263.61 357516.06 00:24:15.573 00:24:15.573 04:35:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:15.833 No valid NVMe controllers or AIO or URING devices found 00:24:15.833 Initializing NVMe Controllers 00:24:15.833 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:15.833 Controller IO queue size 128, less than required. 00:24:15.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.833 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:15.833 Controller IO queue size 128, less than required. 00:24:15.833 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:15.833 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:15.833 WARNING: Some requested NVMe devices were skipped 00:24:15.833 04:35:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:18.374 Initializing NVMe Controllers 00:24:18.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:18.374 Controller IO queue size 128, less than required. 00:24:18.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:18.374 Controller IO queue size 128, less than required. 00:24:18.374 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:18.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:18.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:18.374 Initialization complete. Launching workers. 
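The --transport-stat run's counters follow below. For reference, the spdk_nvme_perf knobs exercised across these runs (standard meanings for this tool):

  spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
  # -q  queue depth per namespace       -o  I/O size in bytes
  # -w  workload pattern                -M  read percentage of the mix
  # -t  run time in seconds             -r  transport ID of the target
  # --transport-stat  dump per-poll-group TCP counters (polls, idle_polls, ...) at exit

In the statistics below, idle_polls counts poll iterations that completed no work; on this run roughly half of the iterations were idle (10299 of 20134 polls for NSID 1).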
00:24:18.374 00:24:18.374 ==================== 00:24:18.374 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:18.374 TCP transport: 00:24:18.374 polls: 20134 00:24:18.374 idle_polls: 10299 00:24:18.374 sock_completions: 9835 00:24:18.374 nvme_completions: 6451 00:24:18.374 submitted_requests: 9670 00:24:18.374 queued_requests: 1 00:24:18.374 00:24:18.374 ==================== 00:24:18.374 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:18.374 TCP transport: 00:24:18.374 polls: 19547 00:24:18.374 idle_polls: 10209 00:24:18.374 sock_completions: 9338 00:24:18.375 nvme_completions: 6735 00:24:18.375 submitted_requests: 10156 00:24:18.375 queued_requests: 1 00:24:18.375 ======================================================== 00:24:18.375 Latency(us) 00:24:18.375 Device Information : IOPS MiB/s Average min max 00:24:18.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1609.81 402.45 81238.22 47735.01 130144.35 00:24:18.375 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1680.69 420.17 76873.68 41188.06 134893.98 00:24:18.375 ======================================================== 00:24:18.375 Total : 3290.50 822.62 79008.94 41188.06 134893.98 00:24:18.375 00:24:18.375 04:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:18.375 04:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:18.375 04:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:18.375 04:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:18.375 04:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:18.375 04:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:18.375 04:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:24:18.375 04:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:18.375 04:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:24:18.375 04:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:18.375 04:35:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:18.375 rmmod nvme_tcp 00:24:18.375 rmmod nvme_fabrics 00:24:18.375 rmmod nvme_keyring 00:24:18.635 04:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:18.635 04:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:24:18.635 04:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:24:18.635 04:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 3087061 ']' 00:24:18.635 04:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 3087061 00:24:18.635 04:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 3087061 ']' 00:24:18.635 04:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 3087061 00:24:18.635 04:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:24:18.635 04:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:18.635 04:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3087061 00:24:18.635 04:35:32 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:18.635 04:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:18.635 04:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3087061' 00:24:18.635 killing process with pid 3087061 00:24:18.635 04:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 3087061 00:24:18.635 04:35:32 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 3087061 00:24:20.546 04:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:20.546 04:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:20.546 04:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:20.546 04:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:24:20.546 04:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:24:20.546 04:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:20.546 04:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:24:20.546 04:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:20.546 04:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:20.546 04:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.546 04:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:20.546 04:35:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:23.088 00:24:23.088 real 0m23.988s 00:24:23.088 user 0m58.824s 00:24:23.088 sys 0m8.165s 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:23.088 ************************************ 00:24:23.088 END TEST nvmf_perf 00:24:23.088 ************************************ 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.088 ************************************ 00:24:23.088 START TEST nvmf_fio_host 00:24:23.088 ************************************ 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:23.088 * Looking for test storage... 
00:24:23.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:23.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.088 --rc genhtml_branch_coverage=1 00:24:23.088 --rc genhtml_function_coverage=1 00:24:23.088 --rc genhtml_legend=1 00:24:23.088 --rc geninfo_all_blocks=1 00:24:23.088 --rc geninfo_unexecuted_blocks=1 00:24:23.088 00:24:23.088 ' 00:24:23.088 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:23.088 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.089 --rc genhtml_branch_coverage=1 00:24:23.089 --rc genhtml_function_coverage=1 00:24:23.089 --rc genhtml_legend=1 00:24:23.089 --rc geninfo_all_blocks=1 00:24:23.089 --rc geninfo_unexecuted_blocks=1 00:24:23.089 00:24:23.089 ' 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:23.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.089 --rc genhtml_branch_coverage=1 00:24:23.089 --rc genhtml_function_coverage=1 00:24:23.089 --rc genhtml_legend=1 00:24:23.089 --rc geninfo_all_blocks=1 00:24:23.089 --rc geninfo_unexecuted_blocks=1 00:24:23.089 00:24:23.089 ' 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:23.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.089 --rc genhtml_branch_coverage=1 00:24:23.089 --rc genhtml_function_coverage=1 00:24:23.089 --rc genhtml_legend=1 00:24:23.089 --rc geninfo_all_blocks=1 00:24:23.089 --rc geninfo_unexecuted_blocks=1 00:24:23.089 00:24:23.089 ' 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.089 04:35:36 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:23.089 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:23.089 
04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:24:23.089 04:35:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:31.230 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:31.230 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:31.230 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:31.230 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:31.230 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:31.231 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.231 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.647 ms 00:24:31.231 00:24:31.231 --- 10.0.0.2 ping statistics --- 00:24:31.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.231 rtt min/avg/max/mdev = 0.647/0.647/0.647/0.000 ms 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:31.231 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
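The ipts wrapper traced above tags the ACCEPT rule it inserts with an 'SPDK_NVMF:' comment; the matching teardown (visible after the fio runs below) then removes every harness-added rule in one pass by filtering the comment out of iptables-save. The pattern, using the interface and port from this run:

  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  iptables-save | grep -v SPDK_NVMF | iptables-restore   # teardown: drop all tagged rules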
00:24:31.231 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:24:31.231 00:24:31.231 --- 10.0.0.1 ping statistics --- 00:24:31.231 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.231 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3094117 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3094117 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 3094117 ']' 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.231 [2024-11-05 04:35:43.781074] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
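nvmf_tcp_init, traced above, turns the two E810 ports into a point-to-point test link: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and both directions are verified with a single ping. A condensed sketch of the same wiring, using the names and addresses from this run:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  ping -c 1 10.0.0.2                                        # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> initiator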
00:24:31.231 [2024-11-05 04:35:43.781127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.231 [2024-11-05 04:35:43.852675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:31.231 [2024-11-05 04:35:43.889924] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.231 [2024-11-05 04:35:43.889962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.231 [2024-11-05 04:35:43.889970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.231 [2024-11-05 04:35:43.889977] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.231 [2024-11-05 04:35:43.889983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.231 [2024-11-05 04:35:43.891643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:31.231 [2024-11-05 04:35:43.891769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.231 [2024-11-05 04:35:43.891868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.231 [2024-11-05 04:35:43.891870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:24:31.231 04:35:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:31.231 [2024-11-05 04:35:44.136466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.231 04:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:31.231 04:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:31.231 04:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.231 04:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:31.231 Malloc1 00:24:31.231 04:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:31.231 04:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:31.231 04:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:31.492 [2024-11-05 04:35:44.928547] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.492 04:35:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
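With nvmf_tgt listening on /var/tmp/spdk.sock, host/fio.sh provisions the target entirely over RPC: a TCP transport, a 64 MiB malloc bdev, a subsystem carrying that bdev as a namespace, and data plus discovery listeners. Condensed from the trace above ($rpc is the rpc.py path shown there; all flag values are exactly as recorded):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1                 # 64 MiB bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420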
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:31.753 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:31.754 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:31.754 04:35:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:32.022 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:32.022 fio-3.35 00:24:32.022 Starting 1 thread 00:24:34.672 00:24:34.672 test: (groupid=0, jobs=1): 
err= 0: pid=3094647: Tue Nov 5 04:35:47 2024 00:24:34.672 read: IOPS=9981, BW=39.0MiB/s (40.9MB/s)(78.1MiB/2004msec) 00:24:34.672 slat (usec): min=2, max=288, avg= 2.19, stdev= 2.91 00:24:34.672 clat (usec): min=3842, max=9155, avg=7055.92, stdev=947.79 00:24:34.672 lat (usec): min=3844, max=9157, avg=7058.10, stdev=947.69 00:24:34.672 clat percentiles (usec): 00:24:34.672 | 1.00th=[ 4621], 5.00th=[ 5014], 10.00th=[ 5276], 20.00th=[ 6587], 00:24:34.672 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7308], 60.00th=[ 7439], 00:24:34.672 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 7963], 95.00th=[ 8160], 00:24:34.672 | 99.00th=[ 8586], 99.50th=[ 8586], 99.90th=[ 8848], 99.95th=[ 8848], 00:24:34.672 | 99.99th=[ 9110] 00:24:34.672 bw ( KiB/s): min=36752, max=45576, per=99.82%, avg=39854.00, stdev=3914.15, samples=4 00:24:34.672 iops : min= 9188, max=11394, avg=9963.50, stdev=978.54, samples=4 00:24:34.672 write: IOPS=9995, BW=39.0MiB/s (40.9MB/s)(78.2MiB/2004msec); 0 zone resets 00:24:34.672 slat (usec): min=2, max=284, avg= 2.26, stdev= 2.20 00:24:34.672 clat (usec): min=2914, max=8141, avg=5668.25, stdev=758.62 00:24:34.672 lat (usec): min=2949, max=8143, avg=5670.52, stdev=758.56 00:24:34.672 clat percentiles (usec): 00:24:34.672 | 1.00th=[ 3752], 5.00th=[ 4015], 10.00th=[ 4228], 20.00th=[ 5276], 00:24:34.672 | 30.00th=[ 5538], 40.00th=[ 5735], 50.00th=[ 5866], 60.00th=[ 5997], 00:24:34.672 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6390], 95.00th=[ 6587], 00:24:34.672 | 99.00th=[ 6849], 99.50th=[ 6980], 99.90th=[ 7177], 99.95th=[ 7308], 00:24:34.672 | 99.99th=[ 8094] 00:24:34.672 bw ( KiB/s): min=37704, max=45904, per=99.94%, avg=39958.00, stdev=3972.88, samples=4 00:24:34.672 iops : min= 9426, max=11476, avg=9989.50, stdev=993.22, samples=4 00:24:34.672 lat (msec) : 4=2.31%, 10=97.69% 00:24:34.672 cpu : usr=72.79%, sys=26.01%, ctx=26, majf=0, minf=17 00:24:34.672 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:34.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:34.672 issued rwts: total=20002,20031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:34.672 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:34.672 00:24:34.672 Run status group 0 (all jobs): 00:24:34.672 READ: bw=39.0MiB/s (40.9MB/s), 39.0MiB/s-39.0MiB/s (40.9MB/s-40.9MB/s), io=78.1MiB (81.9MB), run=2004-2004msec 00:24:34.672 WRITE: bw=39.0MiB/s (40.9MB/s), 39.0MiB/s-39.0MiB/s (40.9MB/s-40.9MB/s), io=78.2MiB (82.0MB), run=2004-2004msec 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:24:34.672 
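The fio run above (and the mock_sgl run that follows) uses the standard SPDK plugin pattern: fio_plugin probes for a sanitizer runtime with ldd (asan_lib stays empty on this build), LD_PRELOADs the spdk_nvme ioengine, and encodes the NVMe-oF target in --filename instead of a device path. Reduced to its essentials from the first invocation:

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
  /usr/src/fio/fio \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
      --bs=4096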
04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:34.672 04:35:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:34.672 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:34.672 fio-3.35 00:24:34.672 Starting 1 thread 00:24:37.267 00:24:37.267 test: (groupid=0, jobs=1): err= 0: pid=3095474: Tue Nov 5 04:35:50 2024 00:24:37.267 read: IOPS=9475, BW=148MiB/s (155MB/s)(297MiB/2009msec) 00:24:37.267 slat (usec): min=3, max=110, avg= 3.61, stdev= 1.65 00:24:37.267 clat (usec): min=2333, max=18208, avg=8046.37, stdev=1987.57 00:24:37.267 lat (usec): min=2336, max=18212, avg=8049.98, stdev=1987.76 00:24:37.267 clat percentiles (usec): 00:24:37.267 | 1.00th=[ 4228], 5.00th=[ 5145], 10.00th=[ 5669], 20.00th=[ 6325], 00:24:37.267 | 30.00th=[ 6849], 40.00th=[ 7373], 50.00th=[ 7832], 60.00th=[ 8455], 00:24:37.267 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[11207], 00:24:37.267 | 99.00th=[13304], 99.50th=[14222], 99.90th=[17695], 99.95th=[17957], 00:24:37.267 | 99.99th=[18220] 00:24:37.267 bw ( KiB/s): min=68704, max=87392, per=49.69%, avg=75344.00, stdev=8421.96, samples=4 00:24:37.267 iops : min= 4294, max= 5462, avg=4709.00, stdev=526.37, samples=4 00:24:37.267 write: IOPS=5443, BW=85.1MiB/s (89.2MB/s)(153MiB/1802msec); 0 zone resets 00:24:37.267 slat (usec): min=39, max=398, 
avg=41.06, stdev= 8.10 00:24:37.267 clat (usec): min=3011, max=16505, avg=9331.06, stdev=1592.19 00:24:37.267 lat (usec): min=3051, max=16544, avg=9372.12, stdev=1594.10 00:24:37.267 clat percentiles (usec): 00:24:37.267 | 1.00th=[ 6390], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[ 8029], 00:24:37.267 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9503], 00:24:37.267 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11338], 95.00th=[12125], 00:24:37.267 | 99.00th=[14353], 99.50th=[15139], 99.90th=[15795], 99.95th=[16188], 00:24:37.267 | 99.99th=[16450] 00:24:37.267 bw ( KiB/s): min=71296, max=91136, per=89.80%, avg=78208.00, stdev=9148.79, samples=4 00:24:37.267 iops : min= 4456, max= 5696, avg=4888.00, stdev=571.80, samples=4 00:24:37.267 lat (msec) : 4=0.51%, 10=78.46%, 20=21.03% 00:24:37.267 cpu : usr=88.94%, sys=9.61%, ctx=42, majf=0, minf=27 00:24:37.267 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:37.267 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:37.267 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:37.267 issued rwts: total=19037,9809,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:37.267 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:37.267 00:24:37.267 Run status group 0 (all jobs): 00:24:37.267 READ: bw=148MiB/s (155MB/s), 148MiB/s-148MiB/s (155MB/s-155MB/s), io=297MiB (312MB), run=2009-2009msec 00:24:37.267 WRITE: bw=85.1MiB/s (89.2MB/s), 85.1MiB/s-85.1MiB/s (89.2MB/s-89.2MB/s), io=153MiB (161MB), run=1802-1802msec 00:24:37.267 04:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:37.538 04:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:37.538 04:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:37.538 04:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:37.538 04:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:37.538 04:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:37.538 04:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:24:37.538 04:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:37.538 04:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:24:37.538 04:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:37.538 04:35:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:37.538 rmmod nvme_tcp 00:24:37.538 rmmod nvme_fabrics 00:24:37.538 rmmod nvme_keyring 00:24:37.538 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:37.538 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:24:37.538 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:24:37.538 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 3094117 ']' 00:24:37.538 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 3094117 00:24:37.538 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 3094117 ']' 00:24:37.538 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 3094117 
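Teardown, traced above and continuing below, reverses setup in order: the subsystem is deleted over RPC, the host-side NVMe modules are unloaded (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring are the recorded removals), and the nvmf_tgt process is killed and reaped. A condensed sketch using this run's pid; the last line is an assumption about what the harness's remove_spdk_ns does (its trace is suppressed), not a command from the log:

  $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp                 # modprobe -r also drops unused nvme_fabrics/nvme_keyring
  modprobe -v -r nvme-fabrics
  kill 3094117 && wait 3094117            # nvmf_tgt was started by this shell, so wait reaps it
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns del cvl_0_0_ns_spdk            # assumed equivalent of remove_spdk_ns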
00:24:37.538 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:24:37.538 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:37.538 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3094117 00:24:37.538 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:37.538 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:37.538 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3094117' 00:24:37.539 killing process with pid 3094117 00:24:37.539 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 3094117 00:24:37.539 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 3094117 00:24:37.800 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:37.800 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:37.800 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:37.800 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:24:37.800 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:24:37.800 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:37.800 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:37.800 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:37.800 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:37.800 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:37.800 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:37.800 04:35:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:39.714 04:35:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:39.714 00:24:39.714 real 0m17.075s 00:24:39.714 user 1m4.062s 00:24:39.714 sys 0m7.393s 00:24:39.714 04:35:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:39.714 04:35:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.714 ************************************ 00:24:39.714 END TEST nvmf_fio_host 00:24:39.714 ************************************ 00:24:39.714 04:35:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:39.714 04:35:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:39.714 04:35:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:39.714 04:35:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.976 ************************************ 00:24:39.976 START TEST nvmf_failover 00:24:39.976 ************************************ 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:39.976 * Looking for test storage... 00:24:39.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:24:39.976 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:39.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.977 --rc genhtml_branch_coverage=1 00:24:39.977 --rc genhtml_function_coverage=1 00:24:39.977 --rc genhtml_legend=1 00:24:39.977 --rc geninfo_all_blocks=1 00:24:39.977 --rc geninfo_unexecuted_blocks=1 00:24:39.977 00:24:39.977 ' 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:39.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.977 --rc genhtml_branch_coverage=1 00:24:39.977 --rc genhtml_function_coverage=1 00:24:39.977 --rc genhtml_legend=1 00:24:39.977 --rc geninfo_all_blocks=1 00:24:39.977 --rc geninfo_unexecuted_blocks=1 00:24:39.977 00:24:39.977 ' 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:39.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.977 --rc genhtml_branch_coverage=1 00:24:39.977 --rc genhtml_function_coverage=1 00:24:39.977 --rc genhtml_legend=1 00:24:39.977 --rc geninfo_all_blocks=1 00:24:39.977 --rc geninfo_unexecuted_blocks=1 00:24:39.977 00:24:39.977 ' 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:39.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:39.977 --rc genhtml_branch_coverage=1 00:24:39.977 --rc genhtml_function_coverage=1 00:24:39.977 --rc genhtml_legend=1 00:24:39.977 --rc geninfo_all_blocks=1 00:24:39.977 --rc geninfo_unexecuted_blocks=1 00:24:39.977 00:24:39.977 ' 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- 
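The cmp_versions walk above decides whether the installed lcov (1.15 in this run) is older than 2 by splitting each version string on '.', '-' and ':' and comparing the fields numerically. A compact sketch with the same outcome, using sort -V instead of the harness's field loop:

  version_lt() {                                    # true when $1 < $2
      [ "$1" = "$2" ] && return 1
      [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
  }
  version_lt 1.15 2 && echo 'lcov < 2: enabling branch/function coverage options'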
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:39.977 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:40.238 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:40.238 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:40.238 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:40.238 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:40.238 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:40.238 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:40.238 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:40.238 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:24:40.238 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:40.238 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:40.238 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:40.238 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:40.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
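failover.sh sets up two RPC endpoints: the target keeps the default /var/tmp/spdk.sock, while the bdevperf initiator it starts later is driven through /var/tmp/bdevperf.sock, selected per call with rpc.py's -s flag. A hypothetical illustration of the split (the attach call does not appear in the trace above; it follows standard rpc.py bdev_nvme_attach_controller usage with this run's listener address filled in):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192        # default socket: the target
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1   # bdevperf side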
00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:24:40.239 04:35:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:48.386 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:48.386 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:48.386 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:48.386 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:48.386 04:36:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:48.386 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:48.386 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:48.386 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.611 ms 00:24:48.386 00:24:48.386 --- 10.0.0.2 ping statistics --- 00:24:48.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.386 rtt min/avg/max/mdev = 0.611/0.611/0.611/0.000 ms 00:24:48.386 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:48.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:48.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.276 ms 00:24:48.386 00:24:48.386 --- 10.0.0.1 ping statistics --- 00:24:48.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.386 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:24:48.386 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:48.386 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:48.386 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:48.386 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.386 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:48.386 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=3100190 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 3100190 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3100190 ']' 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:48.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:48.387 [2024-11-05 04:36:01.132512] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:24:48.387 [2024-11-05 04:36:01.132570] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:48.387 [2024-11-05 04:36:01.227754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:48.387 [2024-11-05 04:36:01.271040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
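[editor's note] The network rig built in the commands above is: port cvl_0_0 is moved into namespace cvl_0_0_ns_spdk as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened, and connectivity is ping-verified in both directions. Condensed restatement of exactly the logged commands (root required; the harness additionally tags its iptables rule with an SPDK_NVMF comment):

```bash
# Condensed from the ip/iptables/ping sequence logged above (run as root).
ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"               # target-side port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, root namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                            # root namespace -> namespaced target
ip netns exec "$ns" ping -c 1 10.0.0.1        # and back
```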
00:24:48.387 [2024-11-05 04:36:01.271087] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:48.387 [2024-11-05 04:36:01.271096] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:48.387 [2024-11-05 04:36:01.271103] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:48.387 [2024-11-05 04:36:01.271109] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:48.387 [2024-11-05 04:36:01.272788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:48.387 [2024-11-05 04:36:01.273000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.387 [2024-11-05 04:36:01.273000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:48.387 04:36:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:48.648 [2024-11-05 04:36:02.130194] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.648 04:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:48.908 Malloc0 00:24:48.909 04:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:48.909 04:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:49.169 04:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.429 [2024-11-05 04:36:02.872152] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:49.429 04:36:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:49.429 [2024-11-05 04:36:03.044613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:49.689 04:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:49.689 [2024-11-05 04:36:03.217143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:24:49.689 04:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:49.689 04:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3100611 00:24:49.689 04:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:49.689 04:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3100611 /var/tmp/bdevperf.sock 00:24:49.689 04:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3100611 ']' 00:24:49.689 04:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:49.689 04:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:49.690 04:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:49.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:49.690 04:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:49.690 04:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:49.950 04:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:49.950 04:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:24:49.950 04:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:50.211 NVMe0n1 00:24:50.471 04:36:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:50.732 00:24:50.732 04:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3100886 00:24:50.732 04:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:50.732 04:36:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:51.673 04:36:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.673 [2024-11-05 04:36:05.307567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f84e0 is same with the state(6) to be set 00:24:51.673 [2024-11-05 04:36:05.307636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f84e0 is same with the state(6) to be set 00:24:51.673 [2024-11-05 04:36:05.307643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f84e0 is same with the state(6) to be set 00:24:51.673 
[the same tcp.c:1773 recv-state message for tqpair=0x11f84e0 repeats many more times while the 4420 path is torn down; duplicate log lines collapsed]
00:24:51.934 04:36:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:55.233 04:36:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:55.233
00:24:55.233 04:36:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:55.494 [2024-11-05 04:36:08.955571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11f9030 is same with the state(6) to be set
[the same tcp.c:1773 recv-state message for tqpair=0x11f9030 repeats many more times while the 4421 path is torn down; duplicate log lines collapsed]
00:24:55.495 04:36:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:58.818 04:36:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:58.818 [2024-11-05 04:36:12.147938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:58.818 04:36:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
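[editor's note] To summarize the failover choreography so far: bdevperf holds controller NVMe0 with -x failover across the 4420/4421 paths, the test drops the 4420 listener (first error burst), registers a 4422 path and drops 4421 (second burst), then restores 4420; each nvmf_subsystem_remove_listener produces the tcp.c:1773 recv-state messages as qpairs on the dead path are torn down. A condensed sketch of that RPC sequence, with the rpc.py path shortened to scripts/rpc.py (the full driver is test/nvmf/host/failover.sh):

```bash
# Condensed from host/failover.sh as exercised above; NQN, addresses and
# ports exactly as logged, rpc.py path shortened for readability.
rpc=scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
bperf="$rpc -s /var/tmp/bdevperf.sock"        # bdevperf's private RPC socket

$bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
       -f ipv4 -n "$nqn" -x failover          # primary path
$bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
       -f ipv4 -n "$nqn" -x failover          # standby path
$rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420   # drop primary
sleep 3
$bperf bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 \
       -f ipv4 -n "$nqn" -x failover          # register a third path
$rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421   # drop standby too
sleep 3
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420      # restore primary
```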
00:24:59.759 04:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:59.759 [2024-11-05 04:36:13.340622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10be4e0 is same with the state(6) to be set
[the same tcp.c:1773 recv-state message for tqpair=0x10be4e0 repeats many more times while the 4422 path is torn down; duplicate log lines collapsed]
00:24:59.760 04:36:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3100886
00:25:06.345 {
00:25:06.345   "results": [
00:25:06.345     {
00:25:06.345       "job": "NVMe0n1",
00:25:06.345       "core_mask": "0x1",
00:25:06.345       "workload": "verify",
00:25:06.345       "status": "finished",
00:25:06.345       "verify_range": {
00:25:06.345         "start": 0,
00:25:06.345         "length": 16384
00:25:06.345       },
00:25:06.345       "queue_depth": 128,
00:25:06.345       "io_size": 4096,
00:25:06.345       "runtime": 15.005765,
00:25:06.345       "iops": 11185.567680154927,
00:25:06.345       "mibps": 43.693623750605184,
00:25:06.345       "io_failed": 8597,
00:25:06.345       "io_timeout": 0,
00:25:06.345       "avg_latency_us": 10858.210399690177,
00:25:06.345       "min_latency_us": 771.4133333333333,
00:25:06.345       "max_latency_us": 21299.2
00:25:06.345     }
00:25:06.345   ],
00:25:06.345   "core_count": 1
00:25:06.345 }
00:25:06.345 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3100611
00:25:06.345 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3100611 ']'
00:25:06.345 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3100611
00:25:06.345 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:25:06.345 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:06.345 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3100611
00:25:06.345 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:25:06.345 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:25:06.345 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3100611'
00:25:06.345 killing process with pid 3100611
00:25:06.345 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3100611
00:25:06.345 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3100611
00:25:06.345 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:06.345 [2024-11-05 04:36:03.291827] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization...
00:25:06.345 [2024-11-05 04:36:03.291913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3100611 ]
00:25:06.345 [2024-11-05 04:36:03.365055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:06.345 [2024-11-05 04:36:03.400755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:25:06.345 Running I/O for 15 seconds...
00:25:06.346 11150.00 IOPS, 43.55 MiB/s [2024-11-05T03:36:19.986Z]
00:25:06.346 [2024-11-05 04:36:05.308179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:06.346 [2024-11-05 04:36:05.308213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for admin cid:1 through cid:3; duplicate log lines collapsed]
00:25:06.346 [2024-11-05 04:36:05.308274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5cd70 is same with the state(6) to be set
00:25:06.346 [2024-11-05 04:36:05.308342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.346 [2024-11-05 04:36:05.308354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the WRITE / ABORTED - SQ DELETION pair repeats for ascending LBAs (96016, 96024, ... through 96432 and onward) as the I/O in flight on the dropped path is aborted; near-duplicate log lines collapsed, and the dump continues beyond this excerpt]
sqid:1 cid:69 nsid:1 lba:96440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.346 [2024-11-05 04:36:05.309293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.346 [2024-11-05 04:36:05.309302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.346 [2024-11-05 04:36:05.309310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.346 [2024-11-05 04:36:05.309319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:96456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.346 [2024-11-05 04:36:05.309326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.346 [2024-11-05 04:36:05.309337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:96464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.346 [2024-11-05 04:36:05.309345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.346 [2024-11-05 04:36:05.309354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:96472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.346 [2024-11-05 04:36:05.309361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.346 [2024-11-05 04:36:05.309371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:96480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.346 [2024-11-05 04:36:05.309378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.346 [2024-11-05 04:36:05.309387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.346 [2024-11-05 04:36:05.309394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.346 [2024-11-05 04:36:05.309404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.346 [2024-11-05 04:36:05.309411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.346 [2024-11-05 04:36:05.309420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:96504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.346 [2024-11-05 04:36:05.309427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.346 [2024-11-05 04:36:05.309437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:96512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.346 [2024-11-05 04:36:05.309444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:96520 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:96528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:96536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 
04:36:05.309629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309801] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.309985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.309995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:95760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:95784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:95816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:95824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:95848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:95856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:95864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:95872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:95896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:95904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 
[2024-11-05 04:36:05.310325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.347 [2024-11-05 04:36:05.310401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.347 [2024-11-05 04:36:05.310487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.347 [2024-11-05 04:36:05.310497] nvme_qpair.c: 
00:25:06.347 [2024-11-05 04:36:05.310540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:06.347 [2024-11-05 04:36:05.310547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:06.347 [2024-11-05 04:36:05.310554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96000 len:8 PRP1 0x0 PRP2 0x0
00:25:06.347 [2024-11-05 04:36:05.310564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.347 [2024-11-05 04:36:05.310602] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:06.347 [2024-11-05 04:36:05.310613] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:25:06.347 [2024-11-05 04:36:05.314166] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:25:06.347 [2024-11-05 04:36:05.314190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5cd70 (9): Bad file descriptor
00:25:06.347 [2024-11-05 04:36:05.394227] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
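The (00/08) suffix that spdk_nvme_print_completion attaches to each abort above is the NVMe status code type / status code pair: type 0x0 is the generic command status set, and code 0x08 within it is "Command Aborted due to SQ Deletion". That is exactly what the failover does here: tearing down qid:1 deletes the submission queue, so every command still queued on it completes with this status. A minimal decoding sketch (illustrative only, not part of the test output; the table covers just the codes seen in this log), in Python:

# Decode the "(sct/sc)" pair printed by spdk_nvme_print_completion,
# e.g. "(00/08)" -> status code type 0x0 (generic), status code 0x08.
GENERIC_STATUS = {
    0x00: "SUCCESS",
    0x08: "ABORTED - SQ DELETION",  # command aborted because its SQ was deleted
}

def decode_status(sct: int, sc: int) -> str:
    # Partial mapping: anything not listed is reported numerically.
    if sct == 0x0:  # generic command status
        return GENERIC_STATUS.get(sc, f"generic status 0x{sc:02x}")
    return f"sct 0x{sct:x} / sc 0x{sc:02x}"

print(decode_status(0x0, 0x08))  # -> ABORTED - SQ DELETION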
00:25:06.347 10763.00 IOPS, 42.04 MiB/s [2024-11-05T03:36:19.987Z]
10935.67 IOPS, 42.72 MiB/s [2024-11-05T03:36:19.987Z]
11084.75 IOPS, 43.30 MiB/s [2024-11-05T03:36:19.988Z]
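The throughput markers above are consistent with 4 KiB I/O: each command in this run is len:8 blocks, which at an assumed 512-byte block size is 4096 bytes per command, and IOPS x 4096 B reproduces the MiB/s column exactly. A quick cross-check sketch (not part of the test output), in Python:

# Cross-check the reported MiB/s against the reported IOPS,
# assuming len:8 blocks of 512 B = 4096 B per command.
IO_BYTES = 8 * 512

for iops in (10763.00, 10935.67, 11084.75):
    mib_s = iops * IO_BYTES / (1024 * 1024)
    print(f"{iops:8.2f} IOPS -> {mib_s:5.2f} MiB/s")
# -> 42.04, 42.72 and 43.30 MiB/s, matching the log lines above.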
[2024-11-05 04:36:08.958101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.348 [2024-11-05 04:36:08.958136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for each queued READ (lba:47208 through lba:47704) and WRITE (lba:47712 onward) on qid:1, every command aborted with ABORTED - SQ DELETION (00/08) ...]
00:25:06.349 [2024-11-05 04:36:08.959661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:06.349 [2024-11-05 04:36:08.959668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959838] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.959989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.959999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.960006] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.960015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:06.349 [2024-11-05 04:36:08.960022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.960042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.349 [2024-11-05 04:36:08.960050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48096 len:8 PRP1 0x0 PRP2 0x0 00:25:06.349 [2024-11-05 04:36:08.960058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.960070] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.349 [2024-11-05 04:36:08.960075] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.349 [2024-11-05 04:36:08.960082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48104 len:8 PRP1 0x0 PRP2 0x0 00:25:06.349 [2024-11-05 04:36:08.960088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.960098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.349 [2024-11-05 04:36:08.960103] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.349 [2024-11-05 04:36:08.960110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48112 len:8 PRP1 0x0 PRP2 0x0 00:25:06.349 [2024-11-05 04:36:08.960117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.960124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.349 [2024-11-05 04:36:08.960130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.349 [2024-11-05 04:36:08.960136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48120 len:8 PRP1 0x0 PRP2 0x0 00:25:06.349 [2024-11-05 04:36:08.960143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.960151] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.349 [2024-11-05 04:36:08.960157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.349 [2024-11-05 04:36:08.960163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48128 len:8 PRP1 0x0 PRP2 0x0 00:25:06.349 [2024-11-05 04:36:08.960171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.960179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.349 [2024-11-05 04:36:08.960184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.349 [2024-11-05 04:36:08.960190] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48136 len:8 PRP1 0x0 PRP2 0x0 00:25:06.349 [2024-11-05 04:36:08.960198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.960206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.349 [2024-11-05 04:36:08.960212] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.349 [2024-11-05 04:36:08.960218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48144 len:8 PRP1 0x0 PRP2 0x0 00:25:06.349 [2024-11-05 04:36:08.960225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.960233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.349 [2024-11-05 04:36:08.960238] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.349 [2024-11-05 04:36:08.960245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48152 len:8 PRP1 0x0 PRP2 0x0 00:25:06.349 [2024-11-05 04:36:08.960252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.960260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.349 [2024-11-05 04:36:08.960266] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.349 [2024-11-05 04:36:08.960273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48160 len:8 PRP1 0x0 PRP2 0x0 00:25:06.349 [2024-11-05 04:36:08.960280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.960289] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.349 [2024-11-05 04:36:08.960294] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.349 [2024-11-05 04:36:08.960301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48168 len:8 PRP1 0x0 PRP2 0x0 00:25:06.349 [2024-11-05 04:36:08.960310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.349 [2024-11-05 04:36:08.960318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.349 [2024-11-05 04:36:08.960324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.349 [2024-11-05 04:36:08.960330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48176 len:8 PRP1 0x0 PRP2 0x0 00:25:06.350 [2024-11-05 04:36:08.960338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.350 [2024-11-05 04:36:08.960345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.350 [2024-11-05 04:36:08.960351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.350 [2024-11-05 04:36:08.960357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
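Every completion in the storm above carries the same status pair "(00/08)". Read as (SCT/SC) per the NVMe base spec, that is status code type 0x0 (generic command status) with status code 0x08, "Command Aborted due to SQ Deletion" — the expected status when a submission queue is torn down underneath queued I/O during a path failover. A minimal decoding sketch, assuming the (SCT/SC) reading of the print format; the helper below is illustrative and not an SPDK API, with labels following SPDK's print strings where known:

    # decode_status.py - hypothetical helper, not part of SPDK.
    # Maps the "(sct/sc)" pair printed with each completion to a label.
    GENERIC_STATUS = {  # SCT 0x0: generic command status codes (NVMe base spec)
        0x00: "SUCCESS",
        0x07: "ABORTED - BY REQUEST",
        0x08: "ABORTED - SQ DELETION",
    }

    def decode(sct: int, sc: int) -> str:
        """Return a human-readable label for an NVMe completion status pair."""
        if sct == 0x0:
            return GENERIC_STATUS.get(sc, f"generic sc=0x{sc:02x}")
        return f"sct=0x{sct:x} sc=0x{sc:02x}"

    if __name__ == "__main__":
        print(decode(0x00, 0x08))  # -> ABORTED - SQ DELETION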
00:25:06.350 [2024-11-05 04:36:08.971694] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:25:06.350 [2024-11-05 04:36:08.971726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:06.350 [... matching ABORTED - SQ DELETION (00/08) completions elided for admin commands cid:3 through cid:0 on qid:0 ...]
00:25:06.350 [2024-11-05 04:36:08.971806] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:25:06.350 [2024-11-05 04:36:08.971847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5cd70 (9): Bad file descriptor
00:25:06.350 [2024-11-05 04:36:08.975418] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:25:06.350 [2024-11-05 04:36:09.009210] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:25:06.350 11054.60 IOPS, 43.18 MiB/s [2024-11-05T03:36:19.990Z] 11145.17 IOPS, 43.54 MiB/s [2024-11-05T03:36:19.990Z] 11259.43 IOPS, 43.98 MiB/s [2024-11-05T03:36:19.990Z] 11309.50 IOPS, 44.18 MiB/s [2024-11-05T03:36:19.990Z] 11310.00 IOPS, 44.18 MiB/s [2024-11-05T03:36:19.990Z]
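The per-second samples above show throughput holding around 11k IOPS across the failover; at 43.18 MiB/s and 11054.60 IOPS that works out to roughly 4 KiB per I/O, consistent with the len:8 (eight 512-byte blocks, i.e. len:0x1000) commands in the aborted-I/O dumps. A throwaway post-processing sketch that pulls the failover hops and these samples out of a saved console log; "build.log" is an assumed file name and the script is not part of the CI job:

    # scrape_failover.py - hypothetical post-processing, not part of the CI job.
    import re

    FAILOVER = re.compile(r"Start failover from (\S+) to (\S+)")
    SAMPLE = re.compile(r"([\d.]+) IOPS, ([\d.]+) MiB/s")

    with open("build.log") as log:
        for line in log:
            hop = FAILOVER.search(line)
            if hop:
                print(f"failover: {hop.group(1)} -> {hop.group(2)}")
            for iops, mibs in SAMPLE.findall(line):
                # e.g. 43.18 MiB/s at 11054.60 IOPS -> ~4.0 KiB per I/O
                kib_per_io = float(mibs) * 1024 / float(iops)
                print(f"sample: {iops} IOPS (~{kib_per_io:.1f} KiB per I/O)")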
00:25:06.350 [2024-11-05 04:36:13.344891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:06.350 [2024-11-05 04:36:13.344926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:06.351 [... identical command/completion pairs elided: WRITE (SGL DATA BLOCK OFFSET, len:0x1000) lba 66184-66864, every one completed as ABORTED - SQ DELETION (00/08) on qid:1 ...]
00:25:06.351 [... "aborting queued i/o" / manual-completion triplets elided for WRITE lba 66872-67008 (PRP1 0x0 PRP2 0x0), all ABORTED - SQ DELETION (00/08) ...]
00:25:06.352 [2024-11-05 04:36:13.346916] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67016 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.346923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.346930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.346936] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.346942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67024 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.346949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.346957] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.346962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.346968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67032 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.346975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.346984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.346990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.346996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67040 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.347004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.347012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.347018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.347024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67048 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.347031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.347039] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.347045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.347051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67056 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.347058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.347066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.347072] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.347078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:67064 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.347085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.347093] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.347099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.347105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67072 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.347112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.347120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.347125] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.347131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67080 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.347138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.347146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.347152] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.347158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67088 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.347165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.347173] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.347178] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.347184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67096 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.347191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.347199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.347204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.347212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67104 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.347220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.347227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.347233] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.347239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67112 len:8 PRP1 0x0 PRP2 0x0 
00:25:06.352 [2024-11-05 04:36:13.347246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.347253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.347259] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.347265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67120 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.347272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.347280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.347285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.347291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67128 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.347298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.347306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.347311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.347318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67136 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.347325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.347332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.347338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.347343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67144 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.347351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.347358] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.347364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.347370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67152 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.347377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.347385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.347390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.357130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67160 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.357159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.357179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.357186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.357193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67168 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.357201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.357209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.357215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.357221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67176 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.357228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.357236] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.357241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.357248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67184 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.357255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.357262] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:06.352 [2024-11-05 04:36:13.357268] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:06.352 [2024-11-05 04:36:13.357274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67192 len:8 PRP1 0x0 PRP2 0x0 00:25:06.352 [2024-11-05 04:36:13.357281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.357325] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:06.352 [2024-11-05 04:36:13.357356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.352 [2024-11-05 04:36:13.357365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.357375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.352 [2024-11-05 04:36:13.357383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.357391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.352 [2024-11-05 04:36:13.357398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.357406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:06.352 [2024-11-05 04:36:13.357413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:06.352 [2024-11-05 04:36:13.357422] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:25:06.352 [2024-11-05 04:36:13.357452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5cd70 (9): Bad file descriptor 00:25:06.352 [2024-11-05 04:36:13.361007] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:25:06.352 [2024-11-05 04:36:13.492139] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:25:06.352 11153.80 IOPS, 43.57 MiB/s [2024-11-05T03:36:19.992Z] 11160.45 IOPS, 43.60 MiB/s [2024-11-05T03:36:19.992Z] 11169.00 IOPS, 43.63 MiB/s [2024-11-05T03:36:19.992Z] 11166.46 IOPS, 43.62 MiB/s [2024-11-05T03:36:19.992Z] 11192.21 IOPS, 43.72 MiB/s 00:25:06.352 Latency(us) 00:25:06.352 [2024-11-05T03:36:19.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.352 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:06.352 Verification LBA range: start 0x0 length 0x4000 00:25:06.352 NVMe0n1 : 15.01 11185.57 43.69 572.91 0.00 10858.21 771.41 21299.20 00:25:06.352 [2024-11-05T03:36:19.992Z] =================================================================================================================== 00:25:06.352 [2024-11-05T03:36:19.992Z] Total : 11185.57 43.69 572.91 0.00 10858.21 771.41 21299.20 00:25:06.352 Received shutdown signal, test time was about 15.000000 seconds 00:25:06.352 00:25:06.352 Latency(us) 00:25:06.352 [2024-11-05T03:36:19.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:06.352 [2024-11-05T03:36:19.992Z] =================================================================================================================== 00:25:06.352 [2024-11-05T03:36:19.992Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:06.352 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:06.352 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:06.352 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:06.352 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3104243 00:25:06.352 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3104243 /var/tmp/bdevperf.sock 00:25:06.352 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:06.352 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 3104243 ']' 00:25:06.352 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:06.352 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:06.352 
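The trace above ends the 15-second verify run: failover.sh checks that exactly three successful controller resets were logged, then relaunches bdevperf idle (-z, wait-for-RPC mode) for the short RPC-driven run that follows. A minimal sketch of that count-and-relaunch pattern, assuming $rootdir/$testdir stand in for the workspace paths printed in the trace:

    # Assert the expected number of failovers recorded in try.txt.
    count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")
    (( count == 3 )) || { echo "expected 3 failovers, saw $count" >&2; exit 1; }

    # Relaunch bdevperf idle so the test can drive it over its RPC socket.
    # Capturing output to try.txt is an assumption, implied by the later 'cat'.
    "$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 1 -f &> "$testdir/try.txt" &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock   # SPDK helper seen in the trace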
00:25:06.352 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:06.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:06.352 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:25:06.352 04:36:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:06.922 04:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:25:06.923 04:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:25:06.923 04:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:06.923 [2024-11-05 04:36:20.493185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:25:06.923 04:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:07.184 [2024-11-05 04:36:20.669597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:07.184 04:36:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:07.445 NVMe0n1
00:25:07.445 04:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:08.016
00:25:08.016 04:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:25:08.278
00:25:08.278 04:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
04:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
04:36:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:08.539 04:36:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:25:11.843 04:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
04:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
04:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3105433
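In the trace above, ports 4421 and 4422 are added as extra listeners on the target, the same NVMe0 controller is attached once per path with -x failover, and detaching the active 10.0.0.2:4420 path is what forces the failover events logged earlier. Condensed into a sketch (the script itself issues each call separately, exactly as traced; the loop and the $rpc shorthand are illustrative):

    rpc=$rootdir/scripts/rpc.py
    # Target side: expose two alternate portals for the same subsystem.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Initiator side (bdevperf's RPC socket): one attach per path, failover policy.
    for port in 4420 4421 4422; do
        $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
    done
    # Dropping the active path makes the bdev layer fail over to the next one.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1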
04:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
04:36:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3105433
00:25:12.785 {
00:25:12.785 "results": [
00:25:12.785 {
00:25:12.785 "job": "NVMe0n1",
00:25:12.785 "core_mask": "0x1",
00:25:12.785 "workload": "verify",
00:25:12.785 "status": "finished",
00:25:12.785 "verify_range": {
00:25:12.785 "start": 0,
00:25:12.785 "length": 16384
00:25:12.785 },
00:25:12.785 "queue_depth": 128,
00:25:12.785 "io_size": 4096,
00:25:12.785 "runtime": 1.010128,
00:25:12.785 "iops": 11087.70373655616,
00:25:12.785 "mibps": 43.3113427209225,
00:25:12.785 "io_failed": 0,
00:25:12.785 "io_timeout": 0,
00:25:12.785 "avg_latency_us": 11490.643504761905,
00:25:12.785 "min_latency_us": 2744.32,
00:25:12.785 "max_latency_us": 15510.186666666666
00:25:12.785 }
00:25:12.785 ],
00:25:12.785 "core_count": 1
00:25:12.785 }
04:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:12.785 [2024-11-05 04:36:19.546090] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization...
[2024-11-05 04:36:19.546149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3104243 ]
[2024-11-05 04:36:19.617021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-05 04:36:19.652938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[2024-11-05 04:36:22.039646] bdev_nvme.c:2035:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-11-05 04:36:22.039692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-05 04:36:22.039704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-05 04:36:22.039714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-05 04:36:22.039722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-05 04:36:22.039731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-05 04:36:22.039738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-05 04:36:22.039750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
[2024-11-05 04:36:22.039758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-11-05 04:36:22.039765] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
[2024-11-05 04:36:22.039795] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
[2024-11-05 04:36:22.039811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x152ad70 (9): Bad file descriptor
[2024-11-05 04:36:22.085637] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
Running I/O for 1 seconds...
11072.00 IOPS, 43.25 MiB/s
00:25:12.786 Latency(us)
00:25:12.786 [2024-11-05T03:36:26.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:12.786 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:12.786 Verification LBA range: start 0x0 length 0x4000
00:25:12.786 NVMe0n1 : 1.01 11087.70 43.31 0.00 0.00 11490.64 2744.32 15510.19
00:25:12.786 [2024-11-05T03:36:26.426Z] ===================================================================================================================
00:25:12.786 [2024-11-05T03:36:26.426Z] Total : 11087.70 43.31 0.00 0.00 11490.64 2744.32 15510.19
00:25:12.786 04:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
04:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:25:13.046 04:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:13.307 04:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:13.307 04:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:25:13.308 04:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:13.569 04:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:25:16.887 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:16.887 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:25:16.887 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3104243
00:25:16.887 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3104243 ']'
00:25:16.887 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3104243
00:25:16.887 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:25:16.887 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:16.887 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3104243
00:25:16.887 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
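The killprocess trace that starts above (and continues below) follows a fixed safety pattern: bail on an empty pid, skip if the process is already gone, refuse to kill a bare sudo, then kill and reap. A hedged reconstruction from the traced checks only, not copied from autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1           # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" || return 0           # already gone, nothing to do
        local process_name=""
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [ "$process_name" = sudo ] && return 1   # never kill a bare sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"     # works here because the pid is a child of the test shell
    }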
00:25:16.887 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:25:16.887 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3104243'
killing process with pid 3104243
00:25:16.887 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3104243
00:25:16.887 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3104243
00:25:16.887 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:25:16.887 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:17.148 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:25:17.148 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:25:17.148 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:25:17.148 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:17.148 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:25:17.148 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:17.148 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:25:17.148 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:17.148 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:17.149 rmmod nvme_tcp
00:25:17.149 rmmod nvme_fabrics
00:25:17.149 rmmod nvme_keyring
00:25:17.149 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:17.149 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:25:17.149 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:25:17.149 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 3100190 ']'
00:25:17.149 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 3100190
00:25:17.149 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 3100190 ']'
00:25:17.149 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 3100190
00:25:17.149 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:25:17.149 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:25:17.149 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3100190
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3100190'
killing process with pid 3100190
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 3100190
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 3100190
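nvmftestfini above tears the stack down in a fixed order: delete the subsystem over RPC, sync, unload the kernel NVMe/TCP modules inside a set +e retry loop (module removal can fail while handles linger), then kill the target process. A hedged sketch of that sequence; the break-on-success is inferred, since the trace only shows one loop iteration:

    "$rootdir/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    set +e                                 # tolerate transient modprobe -r failures
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # also drops nvme_fabrics / nvme_keyring deps
    done
    modprobe -v -r nvme-fabrics
    set -e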
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:17.410 04:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:19.955
00:25:19.955 real 0m39.631s
00:25:19.955 user 2m1.485s
00:25:19.955 sys 0m8.462s
00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable
00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:19.955 ************************************
00:25:19.955 END TEST nvmf_failover
00:25:19.955 ************************************
00:25:19.955 04:36:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:25:19.955 04:36:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:25:19.955 04:36:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:25:19.955 04:36:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:19.955 ************************************
00:25:19.955 START TEST nvmf_host_discovery
00:25:19.955 ************************************
00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:25:19.955 * Looking for test storage...
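run_test is the harness idiom behind the banners and the real/user/sys block above: it validates its arguments (the '[' 3 -le 1 ']' guard), brackets the test script with START/END banners, and times it. A hedged reconstruction from those observations only; the real helper in autotest_common.sh also manages xtrace, which is elided here:

    run_test() {
        [ "$#" -le 1 ] && return 1      # needs a test name plus a command
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                        # produces the real/user/sys lines above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }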
00:25:19.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:25:19.955 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:19.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.956 --rc genhtml_branch_coverage=1 00:25:19.956 --rc genhtml_function_coverage=1 00:25:19.956 --rc genhtml_legend=1 00:25:19.956 --rc geninfo_all_blocks=1 00:25:19.956 --rc geninfo_unexecuted_blocks=1 00:25:19.956 00:25:19.956 ' 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:19.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.956 --rc genhtml_branch_coverage=1 00:25:19.956 --rc genhtml_function_coverage=1 00:25:19.956 --rc genhtml_legend=1 00:25:19.956 --rc geninfo_all_blocks=1 00:25:19.956 --rc geninfo_unexecuted_blocks=1 00:25:19.956 00:25:19.956 ' 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:19.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.956 --rc genhtml_branch_coverage=1 00:25:19.956 --rc genhtml_function_coverage=1 00:25:19.956 --rc genhtml_legend=1 00:25:19.956 --rc geninfo_all_blocks=1 00:25:19.956 --rc geninfo_unexecuted_blocks=1 00:25:19.956 00:25:19.956 ' 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:19.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.956 --rc genhtml_branch_coverage=1 00:25:19.956 --rc genhtml_function_coverage=1 00:25:19.956 --rc genhtml_legend=1 00:25:19.956 --rc geninfo_all_blocks=1 00:25:19.956 --rc geninfo_unexecuted_blocks=1 00:25:19.956 00:25:19.956 ' 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:19.956 04:36:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:19.956 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:25:19.956 04:36:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:26.548 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.548 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:25:26.548 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:26.548 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:26.548 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:26.548 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:26.548 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:26.548 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:26.809 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:26.809 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:26.809 04:36:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:26.809 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:26.810 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:26.810 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:26.810 
04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:26.810 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.070 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.070 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.070 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:27.070 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:27.070 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.070 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.467 ms 00:25:27.070 00:25:27.070 --- 10.0.0.2 ping statistics --- 00:25:27.070 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.070 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:25:27.070 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.070 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:27.070 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:25:27.070 00:25:27.070 --- 10.0.0.1 ping statistics --- 00:25:27.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.071 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=3110571 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 3110571 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 3110571 ']' 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:27.071 04:36:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.071 [2024-11-05 04:36:40.597644] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
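The nvmftestinit sequence above wires the two E810 ports back to back through a network namespace: the target port cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2/24, the initiator port cvl_0_1 keeps 10.0.0.1/24 on the host side, TCP traffic to port 4420 is allowed through iptables, and reachability is proven in both directions with a single ping each way. A standalone sketch of the same topology, using the device names this run reports (assumes root and that the two ports are cabled back to back; the address flushes and the iptables comment tag are omitted for brevity):

  # target side lives in its own namespace so host and target stacks stay separate
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps 10.0.0.1 on the host; target gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic to port 4420 through the host firewall
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity checks matching the two pings logged above
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1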
00:25:27.071 [2024-11-05 04:36:40.597715] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.071 [2024-11-05 04:36:40.698219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.332 [2024-11-05 04:36:40.748627] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.332 [2024-11-05 04:36:40.748680] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.332 [2024-11-05 04:36:40.748688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.332 [2024-11-05 04:36:40.748695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.332 [2024-11-05 04:36:40.748702] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:27.332 [2024-11-05 04:36:40.749459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.903 [2024-11-05 04:36:41.457488] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.903 [2024-11-05 04:36:41.469712] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.903 null0 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.903 null1 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3110812 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3110812 /tmp/host.sock 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 3110812 ']' 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:27.903 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:27.903 04:36:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.164 [2024-11-05 04:36:41.574610] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
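Before the host-side app is provisioned, the first nvmf_tgt (pid 3110571, pinned to core 1 via -m 0x2 and run inside the target namespace) is configured over its default RPC socket: a TCP transport, a discovery listener on 10.0.0.2:8009, and two 512-byte-block null bdevs to export later. A sketch of the same steps as direct scripts/rpc.py calls; rpc_cmd in the trace is a thin wrapper around rpc.py, and every flag value below is copied verbatim from the run:

  # transport first; listeners and namespaces hang off it
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  # discovery service the host side will query on port 8009
  rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  # backing devices for the namespaces added later in the test
  rpc.py bdev_null_create null0 1000 512
  rpc.py bdev_null_create null1 1000 512
  rpc.py bdev_wait_for_examine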
00:25:28.164 [2024-11-05 04:36:41.574673] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3110812 ] 00:25:28.164 [2024-11-05 04:36:41.649826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.164 [2024-11-05 04:36:41.691596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:28.734 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:28.734 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:25:28.734 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:28.734 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:28.734 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.735 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.735 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.735 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:28.735 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.735 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:28.995 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.256 [2024-11-05 04:36:42.652691] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:29.256 04:36:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:25:29.256 04:36:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:25:29.828 [2024-11-05 04:36:43.427719] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:29.828 [2024-11-05 04:36:43.427739] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:29.828 [2024-11-05 04:36:43.427756] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:30.088 
[2024-11-05 04:36:43.557172] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:30.088 [2024-11-05 04:36:43.656027] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:25:30.088 [2024-11-05 04:36:43.657155] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xe45730:1 started. 00:25:30.088 [2024-11-05 04:36:43.658771] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:30.088 [2024-11-05 04:36:43.658791] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:30.088 [2024-11-05 04:36:43.665150] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xe45730 was disconnected and freed. delete nvme_qpair. 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.348 04:36:43 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:30.348 04:36:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:30.609 04:36:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:30.609 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.610 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.610 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.610 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:30.610 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.610 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:30.870 [2024-11-05 04:36:44.315839] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xe45ae0:1 started. 00:25:30.870 [2024-11-05 04:36:44.326992] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xe45ae0 was disconnected and freed. delete nvme_qpair. 
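From here on, the second nvmf_tgt (-m 0x1, RPC socket /tmp/host.sock) plays the NVMe-oF host: bdev_nvme_start_discovery keeps it attached to the discovery service, and each target-side change is verified through bdev_nvme_get_controllers, bdev_get_bdevs, and the notify_get_notifications counters, as the waitforcondition loops above show. A sketch of the round trip driven so far, as direct rpc.py calls with every name, NQN, and port copied from the surrounding trace:

  # host side: follow the discovery service at 10.0.0.2:8009
  rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # target side: expose a subsystem for discovery to find
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0    # surfaces on the host as nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test   # allows the test host NQN; controller "nvme0" then attaches
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1    # surfaces as nvme0n2; notify_id reaches 2

The step that follows just below adds a second listener on port 4421, after which get_subsystem_paths for nvme0 reports both 4420 and 4421.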
00:25:30.870 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.870 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:30.870 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:30.870 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:30.870 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:30.870 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:30.870 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:30.870 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:30.870 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.871 [2024-11-05 04:36:44.409513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:30.871 [2024-11-05 04:36:44.410427] bdev_nvme.c:7273:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:30.871 [2024-11-05 04:36:44.410448] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:30.871 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:31.132 04:36:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:31.132 [2024-11-05 04:36:44.538288] bdev_nvme.c:7215:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:31.132 04:36:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:25:31.132 [2024-11-05 04:36:44.600911] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:25:31.132 [2024-11-05 04:36:44.600947] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:31.132 [2024-11-05 04:36:44.600955] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:31.132 [2024-11-05 04:36:44.600961] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:32.072 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:32.072 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:32.072 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:32.072 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:32.072 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.073 [2024-11-05 04:36:45.673054] bdev_nvme.c:7273:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:32.073 [2024-11-05 04:36:45.673075] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:32.073 [2024-11-05 04:36:45.680390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.073 [2024-11-05 04:36:45.680409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.073 [2024-11-05 04:36:45.680419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.073 [2024-11-05 04:36:45.680427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.073 [2024-11-05 04:36:45.680436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.073 [2024-11-05 04:36:45.680447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.073 [2024-11-05 04:36:45.680456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:32.073 [2024-11-05 04:36:45.680464] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:32.073 [2024-11-05 04:36:45.680472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe15e10 is same with the state(6) to be set 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.073 [2024-11-05 04:36:45.690403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe15e10 (9): Bad file descriptor 00:25:32.073 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.073 [2024-11-05 04:36:45.700438] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:32.073 [2024-11-05 04:36:45.700450] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:32.073 [2024-11-05 04:36:45.700455] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:32.073 [2024-11-05 04:36:45.700460] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:32.073 [2024-11-05 04:36:45.700478] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:32.073 [2024-11-05 04:36:45.700773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.073 [2024-11-05 04:36:45.700789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe15e10 with addr=10.0.0.2, port=4420 00:25:32.073 [2024-11-05 04:36:45.700797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe15e10 is same with the state(6) to be set 00:25:32.073 [2024-11-05 04:36:45.700810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe15e10 (9): Bad file descriptor 00:25:32.073 [2024-11-05 04:36:45.700828] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:32.073 [2024-11-05 04:36:45.700836] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:32.073 [2024-11-05 04:36:45.700844] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:32.073 [2024-11-05 04:36:45.700851] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:32.073 [2024-11-05 04:36:45.700857] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
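Annotation: every waitforcondition check in this trace expands through the same polling helper in autotest_common.sh. Its body is never printed whole, but the xtrace markers at @916-@922 (local cond, local max=10, (( max-- )), eval, return 0, sleep 1) pin it down closely; the following is a reconstruction from those markers, not the verbatim source:

    waitforcondition() {
        # cond is a bash expression, e.g. '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        local cond=$1
        local max=10
        while (( max-- )); do
            # eval re-runs any command substitutions inside cond on every attempt
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1  # assumed: the exhausted case never shows up in this trace
    }

Each condition string is re-evaluated per attempt, which is why the same rpc_cmd/jq pipelines reappear roughly once per second in the log.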
00:25:32.073 [2024-11-05 04:36:45.700866] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:32.073 [2024-11-05 04:36:45.710509] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:32.073 [2024-11-05 04:36:45.710520] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:32.073 [2024-11-05 04:36:45.710525] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:32.073 [2024-11-05 04:36:45.710533] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:32.073 [2024-11-05 04:36:45.710547] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:32.335 [2024-11-05 04:36:45.711010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.335 [2024-11-05 04:36:45.711049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe15e10 with addr=10.0.0.2, port=4420 00:25:32.335 [2024-11-05 04:36:45.711061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe15e10 is same with the state(6) to be set 00:25:32.335 [2024-11-05 04:36:45.711079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe15e10 (9): Bad file descriptor 00:25:32.335 [2024-11-05 04:36:45.711092] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:32.335 [2024-11-05 04:36:45.711098] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:32.335 [2024-11-05 04:36:45.711106] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:32.335 [2024-11-05 04:36:45.711114] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:32.335 [2024-11-05 04:36:45.711119] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:32.335 [2024-11-05 04:36:45.711132] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:32.335 [2024-11-05 04:36:45.720579] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:32.335 [2024-11-05 04:36:45.720595] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:32.335 [2024-11-05 04:36:45.720600] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:32.335 [2024-11-05 04:36:45.720605] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:32.335 [2024-11-05 04:36:45.720622] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
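Annotation: the repeating "Delete qpairs for reset" / "Start reconnecting ctrlr." / "errno = 111" blocks here are expected noise, not a failure: host/discovery.sh@127 (a few entries up) removed the 4420 listener, so every reconnect to 10.0.0.2:4420 now gets ECONNREFUSED (errno 111) until the discovery log page tells the host to drop that path. Paraphrasing the two trace lines that drive this step:

    # remove the first listener; the host keeps retrying 10.0.0.2:4420 and logs
    # ECONNREFUSED (errno = 111) until discovery reports the path as removed
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
    # then wait until only the second port is left on the controller
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'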
00:25:32.335 [2024-11-05 04:36:45.720817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.335 [2024-11-05 04:36:45.720832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe15e10 with addr=10.0.0.2, port=4420 00:25:32.335 [2024-11-05 04:36:45.720840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe15e10 is same with the state(6) to be set 00:25:32.335 [2024-11-05 04:36:45.720852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe15e10 (9): Bad file descriptor 00:25:32.335 [2024-11-05 04:36:45.720863] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:32.335 [2024-11-05 04:36:45.720870] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:32.335 [2024-11-05 04:36:45.720877] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:32.335 [2024-11-05 04:36:45.720884] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:32.335 [2024-11-05 04:36:45.720888] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:32.336 [2024-11-05 04:36:45.720904] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:32.336 [2024-11-05 04:36:45.730653] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:32.336 [2024-11-05 04:36:45.730670] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:32.336 [2024-11-05 04:36:45.730675] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:32.336 [2024-11-05 04:36:45.730679] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:32.336 [2024-11-05 04:36:45.730694] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:32.336 [2024-11-05 04:36:45.731024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.336 [2024-11-05 04:36:45.731036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe15e10 with addr=10.0.0.2, port=4420 00:25:32.336 [2024-11-05 04:36:45.731044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe15e10 is same with the state(6) to be set 00:25:32.336 [2024-11-05 04:36:45.731056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe15e10 (9): Bad file descriptor 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:32.336 [2024-11-05 04:36:45.731066] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:32.336 [2024-11-05 04:36:45.731073] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:32.336 [2024-11-05 04:36:45.731080] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:32.336 [2024-11-05 04:36:45.731086] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:32.336 [2024-11-05 04:36:45.731091] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:32.336 [2024-11-05 04:36:45.731100] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.336 [2024-11-05 04:36:45.740726] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:32.336 [2024-11-05 04:36:45.740738] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:32.336 [2024-11-05 04:36:45.740743] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
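Annotation: the get_* accessors feeding these conditions are one-line RPC pipelines, readable straight off the xtrace at host/discovery.sh@55, @59 and @63. A reconstruction, assuming the /tmp/host.sock initiator socket used throughout this run:

    get_subsystem_names() {  # host/discovery.sh@59
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {  # host/discovery.sh@55
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }

    get_subsystem_paths() {  # host/discovery.sh@63; $1 is the controller name
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }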
00:25:32.336 [2024-11-05 04:36:45.740751] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:32.336 [2024-11-05 04:36:45.740765] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:25:32.336 [2024-11-05 04:36:45.740987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.336 [2024-11-05 04:36:45.741006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe15e10 with addr=10.0.0.2, port=4420 00:25:32.336 [2024-11-05 04:36:45.741013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe15e10 is same with the state(6) to be set 00:25:32.336 [2024-11-05 04:36:45.741024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe15e10 (9): Bad file descriptor 00:25:32.336 [2024-11-05 04:36:45.741885] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:32.336 [2024-11-05 04:36:45.741897] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:32.336 [2024-11-05 04:36:45.741905] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:32.336 [2024-11-05 04:36:45.741911] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:32.336 [2024-11-05 04:36:45.741916] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:32.336 [2024-11-05 04:36:45.741932] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:32.336 [2024-11-05 04:36:45.750796] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:25:32.336 [2024-11-05 04:36:45.750808] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:25:32.336 [2024-11-05 04:36:45.750812] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:25:32.336 [2024-11-05 04:36:45.750817] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:32.336 [2024-11-05 04:36:45.750831] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
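Annotation: a few entries below, is_notification_count_eq re-checks the bdev notification stream (host/discovery.sh@74-75 and @79-80). The trace shows notify_get_notifications -i 2 piped to jq '. | length', then notification_count=0/notify_id=2 and later notification_count=2/notify_id=4, which suggests a cursor advancing by the event count; the cursor arithmetic below is inferred, not shown verbatim:

    get_notification_count() {  # host/discovery.sh@74-75
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications \
            -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))  # inferred from 2 -> 4 after two events
    }

    is_notification_count_eq() {  # host/discovery.sh@79-80
        local expected_count=$1
        waitforcondition 'get_notification_count && ((notification_count == expected_count))'
    }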
00:25:32.336 [2024-11-05 04:36:45.751060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.336 [2024-11-05 04:36:45.751072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe15e10 with addr=10.0.0.2, port=4420 00:25:32.336 [2024-11-05 04:36:45.751079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe15e10 is same with the state(6) to be set 00:25:32.336 [2024-11-05 04:36:45.751091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe15e10 (9): Bad file descriptor 00:25:32.336 [2024-11-05 04:36:45.751107] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:25:32.336 [2024-11-05 04:36:45.751115] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:25:32.336 [2024-11-05 04:36:45.751122] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:25:32.336 [2024-11-05 04:36:45.751128] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:25:32.336 [2024-11-05 04:36:45.751133] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:25:32.336 [2024-11-05 04:36:45.751141] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:25:32.336 [2024-11-05 04:36:45.759135] bdev_nvme.c:7078:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:32.336 [2024-11-05 04:36:45.759153] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.336 04:36:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:32.336 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 
-- # (( max-- )) 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:32.337 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:32.597 04:36:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:32.597 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:32.598 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:25:32.598 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:25:32.598 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:32.598 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:32.598 04:36:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.638 [2024-11-05 04:36:47.125686] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:33.638 [2024-11-05 04:36:47.125704] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:33.638 [2024-11-05 04:36:47.125716] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:33.638 [2024-11-05 04:36:47.212975] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:33.900 [2024-11-05 04:36:47.484505] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:25:33.900 [2024-11-05 04:36:47.485279] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xe22a40:1 started. 
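Annotation: the attach and qpair events above complete the @141 restart of discovery under the same controller name. The very next step (@143, whose JSON-RPC request/response dump follows) starts discovery again with identical arguments and must fail with -17 "File exists"; NOT is the suite's wrapper asserting that a command exits non-zero. In outline:

    # first start succeeds and blocks until attach completes (-w = wait_for_attach)
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # second start with the same ctrlr name must fail with -17 "File exists"
    NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w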
00:25:33.900 [2024-11-05 04:36:47.487090] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:33.900 [2024-11-05 04:36:47.487118] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:33.900 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:33.900 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:33.900 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:33.900 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:33.900 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:33.900 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.900 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:33.900 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.901 [2024-11-05 04:36:47.494598] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xe22a40 was disconnected and freed. delete nvme_qpair. 
00:25:33.901 request: 00:25:33.901 { 00:25:33.901 "name": "nvme", 00:25:33.901 "trtype": "tcp", 00:25:33.901 "traddr": "10.0.0.2", 00:25:33.901 "adrfam": "ipv4", 00:25:33.901 "trsvcid": "8009", 00:25:33.901 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:33.901 "wait_for_attach": true, 00:25:33.901 "method": "bdev_nvme_start_discovery", 00:25:33.901 "req_id": 1 00:25:33.901 } 00:25:33.901 Got JSON-RPC error response 00:25:33.901 response: 00:25:33.901 { 00:25:33.901 "code": -17, 00:25:33.901 "message": "File exists" 00:25:33.901 } 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:33.901 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.161 request: 00:25:34.161 { 00:25:34.161 "name": "nvme_second", 00:25:34.161 "trtype": "tcp", 00:25:34.161 "traddr": "10.0.0.2", 00:25:34.161 "adrfam": "ipv4", 00:25:34.161 "trsvcid": "8009", 00:25:34.161 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:34.161 "wait_for_attach": true, 00:25:34.161 "method": "bdev_nvme_start_discovery", 00:25:34.161 "req_id": 1 00:25:34.161 } 00:25:34.161 Got JSON-RPC error response 00:25:34.161 response: 00:25:34.161 { 00:25:34.161 "code": -17, 00:25:34.161 "message": "File exists" 00:25:34.161 } 00:25:34.161 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:34.162 04:36:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:34.162 04:36:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:35.544 [2024-11-05 04:36:48.750563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.544 [2024-11-05 04:36:48.750592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4fb60 with addr=10.0.0.2, port=8010 00:25:35.544 [2024-11-05 04:36:48.750604] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:35.544 [2024-11-05 04:36:48.750612] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:35.544 [2024-11-05 04:36:48.750620] bdev_nvme.c:7359:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:36.120 [2024-11-05 04:36:49.752961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:36.120 [2024-11-05 04:36:49.752985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe4fb60 with addr=10.0.0.2, port=8010 00:25:36.120 [2024-11-05 04:36:49.752996] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:36.120 [2024-11-05 04:36:49.753003] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:36.120 [2024-11-05 04:36:49.753009] bdev_nvme.c:7359:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:37.508 [2024-11-05 04:36:50.754940] 
bdev_nvme.c:7334:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:37.508 request: 00:25:37.508 { 00:25:37.508 "name": "nvme_second", 00:25:37.508 "trtype": "tcp", 00:25:37.508 "traddr": "10.0.0.2", 00:25:37.508 "adrfam": "ipv4", 00:25:37.508 "trsvcid": "8010", 00:25:37.508 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:37.508 "wait_for_attach": false, 00:25:37.508 "attach_timeout_ms": 3000, 00:25:37.508 "method": "bdev_nvme_start_discovery", 00:25:37.508 "req_id": 1 00:25:37.508 } 00:25:37.508 Got JSON-RPC error response 00:25:37.508 response: 00:25:37.508 { 00:25:37.508 "code": -110, 00:25:37.508 "message": "Connection timed out" 00:25:37.508 } 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3110812 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:37.508 rmmod nvme_tcp 00:25:37.508 rmmod nvme_fabrics 00:25:37.508 rmmod nvme_keyring 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:25:37.508 04:36:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 3110571 ']' 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 3110571 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 3110571 ']' 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 3110571 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3110571 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3110571' 00:25:37.508 killing process with pid 3110571 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 3110571 00:25:37.508 04:36:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 3110571 00:25:37.508 04:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:37.508 04:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:37.508 04:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:37.508 04:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:25:37.508 04:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:25:37.508 04:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:37.508 04:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:25:37.508 04:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:37.508 04:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:37.508 04:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.508 04:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:37.508 04:36:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:40.052 00:25:40.052 real 0m20.036s 00:25:40.052 user 0m23.475s 00:25:40.052 sys 0m6.973s 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:40.052 ************************************ 00:25:40.052 END TEST nvmf_host_discovery 00:25:40.052 ************************************ 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test 
nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.052 ************************************ 00:25:40.052 START TEST nvmf_host_multipath_status 00:25:40.052 ************************************ 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:40.052 * Looking for test storage... 00:25:40.052 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:40.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.052 --rc genhtml_branch_coverage=1 00:25:40.052 --rc genhtml_function_coverage=1 00:25:40.052 --rc genhtml_legend=1 00:25:40.052 --rc geninfo_all_blocks=1 00:25:40.052 --rc geninfo_unexecuted_blocks=1 00:25:40.052 00:25:40.052 ' 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:40.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.052 --rc genhtml_branch_coverage=1 00:25:40.052 --rc genhtml_function_coverage=1 00:25:40.052 --rc genhtml_legend=1 00:25:40.052 --rc geninfo_all_blocks=1 00:25:40.052 --rc geninfo_unexecuted_blocks=1 00:25:40.052 00:25:40.052 ' 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:40.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.052 --rc genhtml_branch_coverage=1 00:25:40.052 --rc genhtml_function_coverage=1 00:25:40.052 --rc genhtml_legend=1 00:25:40.052 --rc geninfo_all_blocks=1 00:25:40.052 --rc geninfo_unexecuted_blocks=1 00:25:40.052 00:25:40.052 ' 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:40.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.052 --rc genhtml_branch_coverage=1 00:25:40.052 --rc genhtml_function_coverage=1 00:25:40.052 --rc genhtml_legend=1 00:25:40.052 --rc geninfo_all_blocks=1 00:25:40.052 --rc geninfo_unexecuted_blocks=1 00:25:40.052 00:25:40.052 ' 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
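The lt 1.15 2 probe traced above is scripts/common.sh deciding whether the installed lcov predates version 2: both strings are split on the characters ".-:" and compared numerically field by field. A minimal bash sketch of that comparison, assuming purely numeric fields (the real cmp_versions also routes each field through its decimal validator):

# Field-wise "less than" over dotted version strings, as traced above.
lt() {
    local -a ver1 ver2
    local IFS=.-: v
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # strictly older
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # strictly newer
    done
    return 1   # equal is not "less than"
}
lt 1.15 2 && echo "lcov 1.15 predates 2"   # the branch this run takes

The comparison is numeric per field (1 < 2), not lexicographic, which is why the older-lcov LCOV_OPTS block with the lcov_branch_coverage rc names is selected above.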
00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.052 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:40.053 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:25:40.053 04:36:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:25:46.635 04:37:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.635 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:46.636 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
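gather_supported_nvmf_pci_devs, traced here, matches every PCI function on the host against a table of Intel and Mellanox device IDs; the surrounding lines show each E810 hit (0x8086 - 0x159b) being resolved to its kernel net device through sysfs. A rough standalone equivalent for the two IDs in the e810 list, assuming lspci and a mounted sysfs:

# Find E810 functions and map each one to its bound netdev via sysfs.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}') \
           $(lspci -Dn -d 8086:1592 | awk '{print $1}'); do
    for net in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $net ]] || continue    # function with no netdev bound
        echo "Found net device under $pci: ${net##*/}"
    done
done

On this host the loop would report the two cvl_0_* interfaces that the rest of the test builds on.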
00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:46.636 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:46.636 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: 
cvl_0_1' 00:25:46.636 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:46.636 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.896 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.896 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.896 04:37:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:46.896 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:46.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.532 ms 00:25:46.896 00:25:46.896 --- 10.0.0.2 ping statistics --- 00:25:46.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.896 rtt min/avg/max/mdev = 0.532/0.532/0.532/0.000 ms 00:25:46.896 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:46.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:25:46.896 00:25:46.896 --- 10.0.0.1 ping statistics --- 00:25:46.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.896 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:25:46.896 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.896 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:25:46.896 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:46.896 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=3116996 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 3116996 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3116996 ']' 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:46.897 04:37:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:46.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:46.897 04:37:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:46.897 [2024-11-05 04:37:00.459105] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:25:46.897 [2024-11-05 04:37:00.459163] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.157 [2024-11-05 04:37:00.539868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:47.157 [2024-11-05 04:37:00.580869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.157 [2024-11-05 04:37:00.580908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.157 [2024-11-05 04:37:00.580916] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.157 [2024-11-05 04:37:00.580926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.157 [2024-11-05 04:37:00.580932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:47.157 [2024-11-05 04:37:00.582280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.158 [2024-11-05 04:37:00.582282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.727 04:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:47.727 04:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:25:47.727 04:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:47.727 04:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:47.727 04:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:47.727 04:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.727 04:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3116996 00:25:47.727 04:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:47.988 [2024-11-05 04:37:01.460992] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.988 04:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:48.248 Malloc0 00:25:48.248 04:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 -r -m 2 00:25:48.248 04:37:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:48.509 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:48.509 [2024-11-05 04:37:02.139034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.770 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:48.770 [2024-11-05 04:37:02.291402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:48.770 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3117362 00:25:48.770 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:48.770 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:48.770 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3117362 /var/tmp/bdevperf.sock 00:25:48.770 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 3117362 ']' 00:25:48.770 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:48.770 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:48.770 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:48.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
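At this point the whole test bed is up: nvmf_tcp_init carved one E810 port into a network namespace to play the target, nvmfappstart launched nvmf_tgt inside it, the @36-@42 RPCs provisioned a malloc namespace behind two listeners, and bdevperf is waiting on its own RPC socket. A condensed, hand-runnable replay of the traced commands (paths, addresses and interface names are the ones from this run; the harness additionally waits for each RPC socket before using it):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc_py="$spdk/scripts/rpc.py"

# Namespace topology, as traced by nvmf_tcp_init: target at 10.0.0.2 inside
# cvl_0_0_ns_spdk, initiator at 10.0.0.1 in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                      # reachability check, as in the log

# Target runs inside the namespace; its UNIX RPC socket is still reachable
# through the shared filesystem at /var/tmp/spdk.sock.
ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &

# Provisioning (@36-@42): TCP transport, 64 MB malloc bdev, one subsystem
# publishing the same namespace on ports 4420 and 4421.
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

# Initiator: bdevperf in the root namespace, then (as the next lines trace)
# the same subsystem attached once per portal with -x multipath, so both
# connections aggregate into a single Nvme0n1 bdev with two I/O paths.
"$spdk/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
    -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10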
00:25:48.770 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:48.770 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:49.031 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:49.031 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:25:49.031 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:49.290 04:37:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:49.550 Nvme0n1 00:25:49.550 04:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:49.810 Nvme0n1 00:25:50.070 04:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:50.070 04:37:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:51.980 04:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:51.980 04:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:52.240 04:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:52.240 04:37:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:53.623 04:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:53.623 04:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:53.623 04:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.623 04:37:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:53.623 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.623 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:53.623 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.623 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:53.623 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:53.623 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:53.623 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.623 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:53.883 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.883 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:53.883 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.883 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:54.143 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.143 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:54.143 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:54.143 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.403 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.403 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:54.403 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:54.404 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:54.404 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.404 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:54.404 04:37:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 
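The helper being traced at this point, set_ANA_state, drives every scenario in this test: it assigns one ANA state per listener on the target side, after which the initiator re-evaluates its paths. A reconstruction consistent with the @59/@60 trace entries:

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# set_ANA_state <state for 4420> <state for 4421>, as in the harness.
set_ANA_state() {
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}
set_ANA_state non_optimized optimized   # the combination being applied here
# ...followed by "sleep 1" so the initiator notices the change.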
00:25:54.664 04:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:54.924 04:37:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:55.868 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:55.868 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:55.868 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:55.868 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:56.129 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:56.129 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:56.129 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.129 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:56.129 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.129 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:56.129 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.129 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:56.390 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.390 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:56.390 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.390 04:37:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:56.651 04:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.651 04:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:56.651 04:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
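Each check_status pass, including the one running here, is six of these probes: current, connected and accessible for each port, every one a single jq filter over the initiator's io_paths dump. A sketch of the port_status predicate consistent with the @64 traces (the real helper lives in host/multipath_status.sh):

rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# port_status <trsvcid> <field> <expected>: succeed iff the path on that
# portal reports the expected value for current/connected/accessible.
port_status() {
    local status
    status=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths \
        | jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$1\").$2")
    [[ "$status" == "$3" ]]
}
port_status 4420 current false   # with 4420 non_optimized, 4421 carries the I/O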
00:25:56.651 04:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:56.651 04:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.651 04:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:56.651 04:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.651 04:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:56.911 04:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.911 04:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:56.911 04:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:57.171 04:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:57.171 04:37:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:58.556 04:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:58.556 04:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:58.556 04:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.556 04:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:58.556 04:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.556 04:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:58.556 04:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.556 04:37:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:58.556 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:58.556 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:58.556 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.556 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:58.817 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.817 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:58.817 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:58.817 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:59.077 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.077 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:59.077 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.077 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:59.077 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.077 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:59.077 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:59.077 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:59.338 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:59.338 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:59.338 04:37:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:59.599 04:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:59.599 04:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:00.981 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:00.981 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:00.981 04:37:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.981 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:00.981 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.981 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:00.981 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.981 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.981 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:00.981 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:00.981 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.981 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:01.242 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.242 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:01.242 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.242 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:01.503 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.503 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:01.503 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.503 04:37:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:01.764 04:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.764 04:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:01.764 04:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.764 04:37:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:01.764 04:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:01.764 04:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:01.764 04:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:02.024 04:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:02.025 04:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:03.408 04:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:03.408 04:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:03.408 04:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.408 04:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:03.408 04:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.408 04:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:03.408 04:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.408 04:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:03.408 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.408 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:03.408 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.408 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:03.668 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.668 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:03.668 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.668 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:03.929 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:03.929 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:03.929 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.929 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:03.929 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:03.929 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:03.929 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:03.929 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:04.190 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:04.190 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:04.190 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:04.450 04:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:04.450 04:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:05.832 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:05.832 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:05.832 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.832 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:05.832 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:05.832 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:05.833 04:37:19 
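check_status, whose body resumes below, simply fans its six positional arguments out to six port_status assertions in the order the @68-@73 markers show: current, connected and accessible, each for port 4420 and then 4421. A sketch of the wrapper implied by the trace (a reconstruction, not the script verbatim):

check_status() {
    # expected values, in trace order:
    # current@4420 current@4421 connected@4420 connected@4421 accessible@4420 accessible@4421
    port_status 4420 current "$1"
    port_status 4421 current "$2"
    port_status 4420 connected "$3"
    port_status 4421 connected "$4"
    port_status 4420 accessible "$5"
    port_status 4421 accessible "$6"
}

So the check_status false true true true false true being evaluated here asserts that with 4420 inaccessible and 4421 optimized, both TCP connections stay established but only the 4421 path is accessible and carrying I/O.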
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.833 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:05.833 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.833 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:05.833 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.833 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:06.094 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.094 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:06.094 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:06.094 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.365 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.365 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:06.365 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.365 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:06.365 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.365 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:06.365 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.365 04:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:06.626 04:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.626 04:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:06.887 04:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:26:06.887 04:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:06.887 04:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:07.147 04:37:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:08.087 04:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:08.087 04:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:08.087 04:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.087 04:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:08.348 04:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.348 04:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:08.348 04:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.348 04:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:08.608 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.608 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:08.608 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.608 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:08.869 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.869 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:08.869 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.869 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:08.869 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:08.869 04:37:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:08.869 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:08.869 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:09.130 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.130 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:09.130 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:09.130 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:09.390 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:09.390 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:09.391 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:09.391 04:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:09.651 04:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:10.591 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:10.591 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:10.591 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.591 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:10.852 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:10.852 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:10.852 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:10.852 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:11.113 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.114 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:11.114 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.114 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:11.114 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.114 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:11.114 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.114 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:11.375 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.375 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:11.375 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.375 04:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:11.635 04:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.635 04:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:11.635 04:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:11.635 04:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:11.636 04:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:11.636 04:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:11.636 04:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:11.897 04:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:12.158 04:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
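Each ANA transition here is driven entirely from the target side: set_ANA_state issues one nvmf_subsystem_listener_set_ana_state RPC per listener, then the test sleeps a second so the initiator can pick up the updated ANA log page before anything is asserted. Reconstructed from the @59/@60 trace lines:

set_ANA_state() {
    # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
    local rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -n "$1"
    $rpc nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4421 -n "$2"
}

Because the multipath policy was flipped to active_active at @116 (bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active), current is no longer exclusive to one path: with both listeners non_optimized, the check_status true true true true true true that follows expects every connected, accessible path to be carrying I/O at once.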
00:26:13.102 04:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:13.102 04:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:13.102 04:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.102 04:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:13.363 04:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.363 04:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:13.363 04:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:13.363 04:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.363 04:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.363 04:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:13.363 04:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.363 04:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:13.624 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.624 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:13.624 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.624 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:13.886 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:13.886 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:13.886 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:13.886 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:14.147 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.147 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:14.147 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:14.147 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:14.148 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:14.148 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:14.148 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:14.409 04:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:14.671 04:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:15.613 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:15.613 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:15.613 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.613 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:15.874 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:15.874 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:15.874 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.874 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:15.874 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:15.874 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:15.874 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:15.874 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:16.136 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:16.136 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:16.136 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.136 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:16.404 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.404 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:16.404 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.404 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:16.404 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:16.404 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:16.404 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:16.404 04:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:16.665 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:16.665 04:37:30
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3117362 00:26:16.665 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3117362 ']' 00:26:16.665 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3117362 00:26:16.665 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:26:16.665 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:16.665 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3117362 00:26:16.665 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:26:16.665 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:26:16.665 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3117362' 00:26:16.665 killing process with pid 3117362 00:26:16.665 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3117362 00:26:16.665 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3117362 00:26:16.665
{
  "results": [
    {
      "job": "Nvme0n1",
      "core_mask": "0x4",
      "workload": "verify",
      "status": "terminated",
      "verify_range": {
        "start": 0,
        "length": 16384
      },
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 26.627773,
      "iops": 10793.429852357536,
      "mibps": 42.161835360771626,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 11841.780362624171,
      "min_latency_us": 360.1066666666667,
      "max_latency_us": 3019898.88
    }
  ],
  "core_count": 1
}
00:26:16.930 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3117362 00:26:16.930 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:16.930 [2024-11-05 04:37:02.358497] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:26:16.930 [2024-11-05 04:37:02.358559] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3117362 ] 00:26:16.930 [2024-11-05 04:37:02.416732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:16.930 [2024-11-05 04:37:02.445836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:16.930 Running I/O for 90 seconds...
00:26:16.930 9576.00 IOPS, 37.41 MiB/s [2024-11-05T03:37:30.570Z] 9634.00 IOPS, 37.63 MiB/s [2024-11-05T03:37:30.570Z] 9653.00 IOPS, 37.71 MiB/s [2024-11-05T03:37:30.570Z] 9653.00 IOPS, 37.71 MiB/s [2024-11-05T03:37:30.570Z] 9943.20 IOPS, 38.84 MiB/s [2024-11-05T03:37:30.570Z] 10403.00 IOPS, 40.64 MiB/s [2024-11-05T03:37:30.570Z] 10785.71 IOPS, 42.13 MiB/s [2024-11-05T03:37:30.570Z] 10750.00 IOPS, 41.99 MiB/s [2024-11-05T03:37:30.570Z] 10629.56 IOPS, 41.52 MiB/s [2024-11-05T03:37:30.570Z] 10534.10 IOPS, 41.15 MiB/s [2024-11-05T03:37:30.570Z] 10465.91 IOPS, 40.88 MiB/s [2024-11-05T03:37:30.570Z]
[2024-11-05 04:37:15.487048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.930 [2024-11-05 04:37:15.487085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.487126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.487142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.487158] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.487174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.487189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.487205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.487221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.487236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.487257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.487273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.487288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.487303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
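These NOTICE lines come in pairs for every I/O that was in flight during a transition: nvme_io_qpair_print_command logs the submitted READ/WRITE (submission queue, command ID, namespace, LBA, block count), and spdk_nvme_print_completion logs the matching completion. The (03/02) status decodes as Status Code Type 0x3 (Path Related Status), Status Code 0x2, Asymmetric Access Inaccessible: these commands were caught at 04:37:15, just after both listeners were set inaccessible, and completed back to the initiator with an ANA error. That is expected here, and the io_failed: 0 in the results above indicates the bdev layer retried them on another path rather than surfacing errors. When triaging a dump like this, a quick count is usually more useful than the raw stream, e.g. (illustrative, not part of the test):

grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt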
00:26:16.930 [2024-11-05 04:37:15.487319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.487334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.487350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.487360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.487365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.488836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.488845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.488858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.488863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.488874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.488879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.488890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.488895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.488906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.488911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.488925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.488930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.488941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 
lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.488948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.488959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.488964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.488975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.930 [2024-11-05 04:37:15.488980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:16.930 [2024-11-05 04:37:15.488991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.488996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489104] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:16.931 
[2024-11-05 04:37:15.489306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:7 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:73032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.931 [2024-11-05 04:37:15.489620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:16.931 [2024-11-05 04:37:15.489632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:16.932 [2024-11-05 04:37:15.489636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0
[Condensed: several hundred further near-identical nvme_qpair.c NOTICE pairs follow, timestamped 04:37:15. Each queued WRITE (sqid:1, nsid:1, len:8, lba 73056 through 73568) is printed by nvme_io_qpair_print_command and then completed by spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1 while the path is in the ANA-inaccessible state.]
10313.08 IOPS, 40.29 MiB/s [2024-11-05T03:37:30.573Z]
9519.77 IOPS, 37.19 MiB/s [2024-11-05T03:37:30.573Z]
8839.79 IOPS, 34.53 MiB/s [2024-11-05T03:37:30.573Z]
8330.47 IOPS, 32.54 MiB/s [2024-11-05T03:37:30.573Z]
8617.50 IOPS, 33.66 MiB/s [2024-11-05T03:37:30.573Z]
8891.53 IOPS, 34.73 MiB/s [2024-11-05T03:37:30.573Z]
9329.78 IOPS, 36.44 MiB/s [2024-11-05T03:37:30.573Z]
9720.89 IOPS, 37.97 MiB/s [2024-11-05T03:37:30.573Z]
9958.05 IOPS, 38.90 MiB/s [2024-11-05T03:37:30.573Z]
10099.24 IOPS, 39.45 MiB/s [2024-11-05T03:37:30.573Z]
10240.77 IOPS, 40.00 MiB/s [2024-11-05T03:37:30.573Z]
10522.48 IOPS, 41.10 MiB/s [2024-11-05T03:37:30.573Z]
10783.50 IOPS, 42.12 MiB/s [2024-11-05T03:37:30.573Z]
[Condensed: a second long run of the same NOTICE pairs, timestamped 04:37:28, covers interleaved WRITE (lba 33512 through 33744) and READ (lba 32824 through 33048) commands, again each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1.]
10865.76 IOPS, 42.44 MiB/s [2024-11-05T03:37:30.574Z]
10821.04 IOPS, 42.27 MiB/s [2024-11-05T03:37:30.574Z]
Received shutdown signal, test time was about 26.628383 seconds

Latency(us)
Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min     max
Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
Verification LBA range: start 0x0 length 0x4000
Nvme0n1            : 26.63       10793.43  42.16  0.00    0.00  11841.78  360.11  3019898.88
=============================================================================================
Total              :             10793.43  42.16  0.00    0.00  11841.78  360.11  3019898.88

00:26:16.934 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:16.934 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:16.934 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:16.934 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:16.934 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:26:16.934 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:26:16.934 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:26:16.934 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
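Stripped of the xtrace prefixes, the teardown traced above and continued just below reduces to a short shell sequence. A minimal sketch — the subsystem NQN, the target pid 3116996 and the namespace name are this run's values, and treating _remove_spdk_ns as a plain ip netns delete is an assumption, since its body is not traced here:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Drop the test subsystem before tearing the transport down.
  "$spdk/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  sync
  # Unload host-side kernel modules; -v prints the rmmod lines seen below.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Stop the nvmf_tgt reactor; wait works in the real script because the
  # target was started as a child of the same shell.
  kill 3116996
  wait 3116996 2>/dev/null || true

  # Restore the firewall minus the SPDK_NVMF-tagged rules.
  iptables-save | grep -v SPDK_NVMF | iptables-restore

  # Assumed equivalent of _remove_spdk_ns for this run's namespace.
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1

Tagging each rule with an SPDK_NVMF comment at insertion time is what makes the grep -v / iptables-restore round-trip a targeted cleanup rather than a full firewall flush.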
04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:16.934 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:16.934 rmmod nvme_tcp 00:26:16.934 rmmod nvme_fabrics 00:26:17.196 rmmod nvme_keyring 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 3116996 ']' 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 3116996 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 3116996 ']' 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 3116996 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3116996 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3116996' 00:26:17.196 killing process with pid 3116996 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 3116996 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 3116996 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.196 04:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:17.196 04:37:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:19.818 04:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:26:19.818
00:26:19.818 real 0m39.651s
00:26:19.818 user 1m42.940s
00:26:19.818 sys 0m11.233s
00:26:19.818 04:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:19.818 04:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:19.818 ************************************
00:26:19.818 END TEST nvmf_host_multipath_status
00:26:19.818 ************************************
00:26:19.818 04:37:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:19.818 04:37:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:26:19.818 04:37:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:26:19.818 04:37:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:19.818 ************************************
00:26:19.818 START TEST nvmf_discovery_remove_ifc
00:26:19.818 ************************************
00:26:19.818 04:37:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:19.818 * Looking for test storage...
00:26:19.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:19.818 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:26:19.818 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version
00:26:19.818 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:26:19.818 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:26:19.818 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:19.818 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:19.818 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-:
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-:
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<'
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in
04:37:33
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:19.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.819 --rc genhtml_branch_coverage=1 00:26:19.819 --rc genhtml_function_coverage=1 00:26:19.819 --rc genhtml_legend=1 00:26:19.819 --rc geninfo_all_blocks=1 00:26:19.819 --rc geninfo_unexecuted_blocks=1 00:26:19.819 00:26:19.819 ' 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:19.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.819 --rc genhtml_branch_coverage=1 00:26:19.819 --rc genhtml_function_coverage=1 00:26:19.819 --rc genhtml_legend=1 00:26:19.819 --rc geninfo_all_blocks=1 00:26:19.819 --rc geninfo_unexecuted_blocks=1 00:26:19.819 00:26:19.819 ' 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:19.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.819 --rc genhtml_branch_coverage=1 00:26:19.819 --rc genhtml_function_coverage=1 00:26:19.819 --rc genhtml_legend=1 00:26:19.819 --rc geninfo_all_blocks=1 00:26:19.819 --rc geninfo_unexecuted_blocks=1 00:26:19.819 00:26:19.819 ' 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:19.819 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.819 --rc genhtml_branch_coverage=1 00:26:19.819 --rc genhtml_function_coverage=1 00:26:19.819 --rc genhtml_legend=1 
00:26:19.819 --rc geninfo_all_blocks=1 00:26:19.819 --rc geninfo_unexecuted_blocks=1 00:26:19.819 00:26:19.819 ' 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain directories repeated several more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[Condensed: paths/export.sh@3 and @4 prepend /opt/go/1.21.1/bin and /opt/protoc/21.7/bin once more, growing the already duplicate-laden PATH; the repeated directory strings are elided here.]
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo [the exported PATH, elided — same directory list as above]
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:26:19.819 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:19.819 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.820 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:19.820 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:19.820 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:19.820 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.820 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:19.820 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.820 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:19.820 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:19.820 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:26:19.820 04:37:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:28.016 04:37:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:28.016 Found 
0000:4b:00.0 (0x8086 - 0x159b) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:28.016 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:28.016 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:28.016 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:28.017 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:28.017 04:37:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:28.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.648 ms 00:26:28.017 00:26:28.017 --- 10.0.0.2 ping statistics --- 00:26:28.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.017 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:26:28.017 00:26:28.017 --- 10.0.0.1 ping statistics --- 00:26:28.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.017 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=3127242 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 3127242 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
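The nvmf_tcp_init steps traced above turn the E810 pair into a two-endpoint topology: the target port moves into a private network namespace while the initiator port stays in the default one, and a tagged iptables rule opens the NVMe/TCP port. Condensed into a sketch (interface names cvl_0_0/cvl_0_1 and the 10.0.0.x addresses are the ones this rig uses; run as root):

TARGET_IF=cvl_0_0        # moves into the namespace, gets 10.0.0.2
INITIATOR_IF=cvl_0_1     # stays in the default namespace, gets 10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the SPDK_NVMF comment tags the rule so the
# teardown's "iptables-save | grep -v SPDK_NVMF" pass can strip it again.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: allow 4420'
# Verify both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1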
00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3127242 ']' 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:28.017 04:37:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.017 [2024-11-05 04:37:40.663940] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:26:28.017 [2024-11-05 04:37:40.664008] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.017 [2024-11-05 04:37:40.762121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.017 [2024-11-05 04:37:40.812410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.017 [2024-11-05 04:37:40.812461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.017 [2024-11-05 04:37:40.812470] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.017 [2024-11-05 04:37:40.812477] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.017 [2024-11-05 04:37:40.812483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:28.017 [2024-11-05 04:37:40.813287] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.017 [2024-11-05 04:37:41.535476] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.017 [2024-11-05 04:37:41.543716] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:28.017 null0 00:26:28.017 [2024-11-05 04:37:41.575666] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3127297 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3127297 /tmp/host.sock 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 3127297 ']' 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:26:28.017 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:28.018 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:28.018 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:28.018 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:28.018 04:37:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.018 [2024-11-05 04:37:41.653204] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
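The notices above come from the target instance (started via ip netns exec ... nvmf_tgt -m 0x2): a TCP transport, a discovery listener on 10.0.0.2:8009, a null bdev named null0, and a data listener on 10.0.0.2:4420. The RPC batch behind discovery_remove_ifc.sh@43 is not echoed in the log, so what follows is only a plausible reconstruction using standard scripts/rpc.py calls; the null0 sizes and the placeholder SPDK path are assumptions, while the NQN, addresses, and the '-t tcp -o' transport options are taken from the trace:

SPDK=/path/to/spdk                       # placeholder checkout path
RPC="$SPDK/scripts/rpc.py"               # target answers on the default /var/tmp/spdk.sock

$RPC nvmf_create_transport -t tcp -o     # matches NVMF_TRANSPORT_OPTS='-t tcp -o'
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 8009 -f ipv4
$RPC bdev_null_create null0 1000 512     # 1000 MiB / 512 B blocks: arbitrary sizes
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4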
00:26:28.018 [2024-11-05 04:37:41.653263] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3127297 ] 00:26:28.278 [2024-11-05 04:37:41.728387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.279 [2024-11-05 04:37:41.771081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.848 04:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:28.848 04:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:26:28.848 04:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:28.849 04:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:28.849 04:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.849 04:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:28.849 04:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:28.849 04:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:28.849 04:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:28.849 04:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:29.109 04:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.109 04:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:29.109 04:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.109 04:37:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.049 [2024-11-05 04:37:43.575906] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:30.049 [2024-11-05 04:37:43.575927] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:30.049 [2024-11-05 04:37:43.575940] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:30.310 [2024-11-05 04:37:43.703342] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:30.310 [2024-11-05 04:37:43.803245] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:30.310 [2024-11-05 04:37:43.804322] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x23e4390:1 started. 
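On the host side the test starts a second nvmf_tgt bound to /tmp/host.sock and drives it through the same RPC surface. Assuming scripts/rpc.py stands in for the test's rpc_cmd wrapper, the sequence above reduces to the sketch below; the flags are copied verbatim from the trace, and the aggressive loss/reconnect timeouts are what make the interface-removal phase finish quickly:

HOST_SOCK=/tmp/host.sock

# --wait-for-rpc holds initialization until framework_start_init is called
"$SPDK/build/bin/nvmf_tgt" -m 0x1 -r "$HOST_SOCK" --wait-for-rpc -L bdev_nvme &
hostpid=$!

"$SPDK/scripts/rpc.py" -s "$HOST_SOCK" bdev_nvme_set_options -e 1   # as invoked at @65
"$SPDK/scripts/rpc.py" -s "$HOST_SOCK" framework_start_init

# Attach through the discovery service; --wait-for-attach blocks until nvme0 is up
"$SPDK/scripts/rpc.py" -s "$HOST_SOCK" bdev_nvme_start_discovery -b nvme \
    -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach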
00:26:30.310 [2024-11-05 04:37:43.805884] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:30.310 [2024-11-05 04:37:43.805928] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:30.310 [2024-11-05 04:37:43.805950] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:30.310 [2024-11-05 04:37:43.805963] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:30.310 [2024-11-05 04:37:43.805984] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:30.310 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.310 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:30.310 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.310 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.310 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.310 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.310 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.310 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.310 [2024-11-05 04:37:43.813385] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x23e4390 was disconnected and freed. delete nvme_qpair. 
00:26:30.310 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.310 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.310 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:30.310 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:30.310 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:30.571 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:30.571 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:30.571 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.571 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:30.571 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:30.571 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:30.571 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:30.571 04:37:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:30.571 04:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:30.571 04:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:30.571 04:37:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:31.511 04:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:31.511 04:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.511 04:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:31.511 04:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.511 04:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:31.511 04:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:31.511 04:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:31.511 04:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.511 04:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:31.511 04:37:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:32.896 04:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:32.896 04:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.896 04:37:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:32.896 04:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:32.896 04:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:32.896 04:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:32.896 04:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:32.896 04:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:32.896 04:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:32.896 04:37:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:33.837 04:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:33.837 04:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.837 04:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:33.837 04:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.837 04:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:33.837 04:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:33.837 04:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:33.837 04:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.837 04:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:33.837 04:37:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:34.778 04:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:34.778 04:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.778 04:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:34.778 04:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.778 04:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:34.778 04:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:34.778 04:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:34.778 04:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.778 04:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:34.778 04:37:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:35.719 [2024-11-05 04:37:49.246679] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 
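The repeated bdev_get_bdevs | jq | sort | xargs calls above, one second apart, are the test's get_bdev_list/wait_for_bdev helpers polling until the host's bdev list matches an expectation. A simplified reconstruction (the real helper compares against a glob pattern rather than a plain string):

get_bdev_list() {
    "$SPDK/scripts/rpc.py" -s /tmp/host.sock bdev_get_bdevs |
        jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once a second until the bdev list equals the expected value
    local expected=$1
    while [[ $(get_bdev_list) != "$expected" ]]; do
        sleep 1
    done
}

wait_for_bdev nvme0n1    # the namespace shows up once discovery attaches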
00:26:35.719 [2024-11-05 04:37:49.246722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.719 [2024-11-05 04:37:49.246734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.719 [2024-11-05 04:37:49.246744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.719 [2024-11-05 04:37:49.246755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.719 [2024-11-05 04:37:49.246763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.719 [2024-11-05 04:37:49.246771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.719 [2024-11-05 04:37:49.246779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.719 [2024-11-05 04:37:49.246786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.719 [2024-11-05 04:37:49.246794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:35.719 [2024-11-05 04:37:49.246802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:35.719 [2024-11-05 04:37:49.246809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c0c00 is same with the state(6) to be set 00:26:35.719 [2024-11-05 04:37:49.256699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c0c00 (9): Bad file descriptor 00:26:35.719 [2024-11-05 04:37:49.266737] bdev_nvme.c:2536:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:35.719 [2024-11-05 04:37:49.266752] bdev_nvme.c:2524:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:35.719 [2024-11-05 04:37:49.266758] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:35.719 [2024-11-05 04:37:49.266764] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:35.719 [2024-11-05 04:37:49.266785] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:35.719 04:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:35.719 04:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.719 04:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:35.719 04:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.719 04:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:35.719 04:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:35.719 04:37:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:37.101 [2024-11-05 04:37:50.324805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:37.101 [2024-11-05 04:37:50.324864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23c0c00 with addr=10.0.0.2, port=4420 00:26:37.101 [2024-11-05 04:37:50.324880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c0c00 is same with the state(6) to be set 00:26:37.101 [2024-11-05 04:37:50.324914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23c0c00 (9): Bad file descriptor 00:26:37.101 [2024-11-05 04:37:50.325323] bdev_nvme.c:3152:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:26:37.101 [2024-11-05 04:37:50.325351] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:37.101 [2024-11-05 04:37:50.325359] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:37.101 [2024-11-05 04:37:50.325369] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:37.101 [2024-11-05 04:37:50.325376] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:37.101 [2024-11-05 04:37:50.325383] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:37.101 [2024-11-05 04:37:50.325399] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:37.101 [2024-11-05 04:37:50.325408] bdev_nvme.c:2112:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:37.101 [2024-11-05 04:37:50.325414] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:37.101 04:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:37.101 04:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:37.101 04:37:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.042 [2024-11-05 04:37:51.327789] bdev_nvme.c:2508:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:38.042 [2024-11-05 04:37:51.327812] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:38.042 [2024-11-05 04:37:51.327825] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:38.042 [2024-11-05 04:37:51.327832] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:38.042 [2024-11-05 04:37:51.327840] nvme_ctrlr.c:1071:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:26:38.042 [2024-11-05 04:37:51.327847] bdev_nvme.c:2498:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:38.042 [2024-11-05 04:37:51.327853] bdev_nvme.c:2305:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:38.042 [2024-11-05 04:37:51.327865] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:38.042 [2024-11-05 04:37:51.327883] bdev_nvme.c:7042:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:38.042 [2024-11-05 04:37:51.327910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.042 [2024-11-05 04:37:51.327920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.042 [2024-11-05 04:37:51.327936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.042 [2024-11-05 04:37:51.327944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.042 [2024-11-05 04:37:51.327952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.042 [2024-11-05 04:37:51.327960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.042 [2024-11-05 04:37:51.327969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.042 [2024-11-05 04:37:51.327976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.042 [2024-11-05 04:37:51.327985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:38.042 [2024-11-05 04:37:51.327992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:38.042 [2024-11-05 04:37:51.328000] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
00:26:38.042 [2024-11-05 04:37:51.328313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b0340 (9): Bad file descriptor 00:26:38.042 [2024-11-05 04:37:51.329326] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:38.042 [2024-11-05 04:37:51.329337] nvme_ctrlr.c:1190:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:26:38.042 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.042 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.042 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.042 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.042 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.042 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:38.043 04:37:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:38.983 04:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:38.983 04:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:38.983 04:37:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:38.983 04:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:38.983 04:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:38.983 04:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:38.983 04:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:38.983 04:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:38.983 04:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:38.983 04:37:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:39.923 [2024-11-05 04:37:53.379839] bdev_nvme.c:7291:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:39.923 [2024-11-05 04:37:53.379856] bdev_nvme.c:7377:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:39.923 [2024-11-05 04:37:53.379869] bdev_nvme.c:7254:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:39.923 [2024-11-05 04:37:53.507305] bdev_nvme.c:7220:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:40.183 04:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:40.183 04:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:40.183 04:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:40.183 04:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:40.183 04:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:40.183 04:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:40.183 04:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:40.183 04:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:40.183 04:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:40.183 04:37:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:40.183 [2024-11-05 04:37:53.687391] bdev_nvme.c:5582:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:26:40.183 [2024-11-05 04:37:53.688263] bdev_nvme.c:1963:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x23bc3d0:1 started. 
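Everything from the connection timeout through the rediscovery above is driven by a short remove/restore cycle on the target interface. Condensed, reusing wait_for_bdev from the earlier sketch:

NS=cvl_0_0_ns_spdk

# Pull the target address out from under the established connection
ip netns exec "$NS" ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec "$NS" ip link set cvl_0_0 down
wait_for_bdev ''         # nvme0n1 disappears once the ctrlr-loss timeout fires

# Restore the interface; the still-running discovery service re-attaches
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec "$NS" ip link set cvl_0_0 up
wait_for_bdev nvme1n1    # the rediscovered subsystem comes back as nvme1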
00:26:40.183 [2024-11-05 04:37:53.689486] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:40.183 [2024-11-05 04:37:53.689521] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:40.183 [2024-11-05 04:37:53.689540] bdev_nvme.c:8087:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:40.183 [2024-11-05 04:37:53.689554] bdev_nvme.c:7110:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:40.183 [2024-11-05 04:37:53.689562] bdev_nvme.c:7069:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:40.183 [2024-11-05 04:37:53.697557] bdev_nvme.c:1779:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x23bc3d0 was disconnected and freed. delete nvme_qpair. 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3127297 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3127297 ']' 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3127297 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:41.123 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3127297 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3127297' 00:26:41.383 killing process with pid 3127297 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3127297 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3127297 00:26:41.383 04:37:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:41.383 rmmod nvme_tcp 00:26:41.383 rmmod nvme_fabrics 00:26:41.383 rmmod nvme_keyring 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 3127242 ']' 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 3127242 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 3127242 ']' 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 3127242 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:41.383 04:37:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3127242 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3127242' 00:26:41.644 killing process with pid 3127242 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 3127242 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 3127242 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # 
[[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:41.644 04:37:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:44.186 00:26:44.186 real 0m24.257s 00:26:44.186 user 0m29.357s 00:26:44.186 sys 0m6.995s 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:44.186 ************************************ 00:26:44.186 END TEST nvmf_discovery_remove_ifc 00:26:44.186 ************************************ 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.186 ************************************ 00:26:44.186 START TEST nvmf_identify_kernel_target 00:26:44.186 ************************************ 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:44.186 * Looking for test storage... 
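The teardown above follows the usual autotest pattern: kill both nvmf_tgt instances, unload the kernel NVMe/TCP modules, strip the tagged firewall rules, and dismantle the namespace. A condensed sketch; the plain netns delete is an assumption standing in for the _remove_spdk_ns helper, which does more bookkeeping:

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                                    # still alive?
    [[ $(ps --no-headers -o comm= "$pid") != sudo ]] || return 1  # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" || true                            # reap our own child
}

killprocess "$hostpid"      # host instance on /tmp/host.sock
killprocess "$nvmfpid"      # target instance inside the namespace

modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics

# iptr: restore the firewall minus every rule tagged SPDK_NVMF
iptables-save | grep -v SPDK_NVMF | iptables-restore

ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # stand-in for _remove_spdk_ns
ip -4 addr flush cvl_0_1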
00:26:44.186 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:44.186 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:44.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.187 --rc genhtml_branch_coverage=1 00:26:44.187 --rc genhtml_function_coverage=1 00:26:44.187 --rc genhtml_legend=1 00:26:44.187 --rc geninfo_all_blocks=1 00:26:44.187 --rc geninfo_unexecuted_blocks=1 00:26:44.187 00:26:44.187 ' 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:44.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.187 --rc genhtml_branch_coverage=1 00:26:44.187 --rc genhtml_function_coverage=1 00:26:44.187 --rc genhtml_legend=1 00:26:44.187 --rc geninfo_all_blocks=1 00:26:44.187 --rc geninfo_unexecuted_blocks=1 00:26:44.187 00:26:44.187 ' 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:44.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.187 --rc genhtml_branch_coverage=1 00:26:44.187 --rc genhtml_function_coverage=1 00:26:44.187 --rc genhtml_legend=1 00:26:44.187 --rc geninfo_all_blocks=1 00:26:44.187 --rc geninfo_unexecuted_blocks=1 00:26:44.187 00:26:44.187 ' 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:44.187 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.187 --rc genhtml_branch_coverage=1 00:26:44.187 --rc genhtml_function_coverage=1 00:26:44.187 --rc genhtml_legend=1 00:26:44.187 --rc geninfo_all_blocks=1 00:26:44.187 --rc geninfo_unexecuted_blocks=1 00:26:44.187 00:26:44.187 ' 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:26:44.187 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:26:44.187 04:37:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:26:52.327 04:38:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:52.327 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.327 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:52.328 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:52.328 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:52.328 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:52.328 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:52.328 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.697 ms 00:26:52.328 00:26:52.328 --- 10.0.0.2 ping statistics --- 00:26:52.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.328 rtt min/avg/max/mdev = 0.697/0.697/0.697/0.000 ms 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:52.328 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:52.328 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:26:52.328 00:26:52.328 --- 10.0.0.1 ping statistics --- 00:26:52.328 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.328 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:52.328 04:38:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:52.328 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:52.329 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:52.329 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:52.329 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:52.329 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:52.329 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:52.329 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:26:52.329 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:26:52.329 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:52.329 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:52.329 04:38:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:54.874 Waiting for block devices as requested 00:26:54.874 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:54.874 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:55.135 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:55.135 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:55.135 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:55.397 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:55.397 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:55.397 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:55.657 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:26:55.657 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:26:55.917 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:26:55.917 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:26:55.917 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:26:55.917 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:26:56.180 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:26:56.180 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:26:56.180 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:26:56.439 04:38:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
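For orientation, the nvmf_tcp_init sequence traced above condenses to the sketch below; the interface names (cvl_0_0/cvl_0_1), addresses, and the SPDK_NVMF rule tag are the values from this run, and error handling is omitted:

    # Two E810 ports cabled back-to-back; the target port is moved into its own
    # network namespace so NVMe/TCP traffic actually crosses the wire instead
    # of short-circuiting through the local stack.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The ipts wrapper tags the firewall rule with an SPDK_NVMF comment so that
    # cleanup can later sweep every test rule in one pass:
    #   iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # One ping in each direction verifies the path before the test proper.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1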
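configure_kernel_target, whose trace continues below, brings up the kernel NVMe-oF target through the nvmet configfs tree. Bash xtrace does not print redirection targets, so the attribute paths in this consolidated sketch are filled in from the standard nvmet configfs layout rather than from the trace itself; the NQN, backing device, and listen address are the ones used in this run:

    modprobe nvmet
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=$nvmet/ports/1

    mkdir "$subsys" "$subsys/namespaces/1" "$port"

    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # reported as Model Number
    echo 1 > "$subsys/attr_allow_any_host"                         # no host allow-list
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"         # backing block device
    echo 1 > "$subsys/namespaces/1/enable"

    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"

    # Exposing the subsystem on the port is a symlink:
    ln -s "$subsys" "$port/subsystems/"

The later identify output ("Model Number: SPDK-nqn.2016-06.io.spdk:testnqn") and the second discovery-log entry are consistent with these writes.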
00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:56.439 No valid GPT data, bailing 00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:56.439 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:56.700 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:56.700 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:56.700 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:56.700 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:56.700 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:56.700 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:56.700 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:56.700 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:56.700 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:56.700 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:26:56.700 00:26:56.700 Discovery Log Number of Records 2, Generation counter 2 00:26:56.700 =====Discovery Log Entry 0====== 00:26:56.700 trtype: tcp 00:26:56.700 adrfam: ipv4 00:26:56.700 subtype: current discovery subsystem 00:26:56.700 treq: not specified, sq flow control disable supported 00:26:56.700 portid: 1 00:26:56.700 trsvcid: 4420 00:26:56.700 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:56.700 traddr: 10.0.0.1 00:26:56.700 eflags: none 00:26:56.700 sectype: none 00:26:56.700 =====Discovery Log Entry 1====== 00:26:56.700 trtype: tcp 00:26:56.700 adrfam: ipv4 00:26:56.700 subtype: nvme subsystem 00:26:56.700 treq: not specified, sq flow control disable 
supported 00:26:56.700 portid: 1 00:26:56.700 trsvcid: 4420 00:26:56.700 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:56.700 traddr: 10.0.0.1 00:26:56.700 eflags: none 00:26:56.700 sectype: none 00:26:56.700 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:56.700 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:56.700 ===================================================== 00:26:56.700 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:56.700 ===================================================== 00:26:56.700 Controller Capabilities/Features 00:26:56.700 ================================ 00:26:56.700 Vendor ID: 0000 00:26:56.700 Subsystem Vendor ID: 0000 00:26:56.700 Serial Number: 6443e58f8a04d6ad813c 00:26:56.700 Model Number: Linux 00:26:56.700 Firmware Version: 6.8.9-20 00:26:56.700 Recommended Arb Burst: 0 00:26:56.700 IEEE OUI Identifier: 00 00 00 00:26:56.700 Multi-path I/O 00:26:56.700 May have multiple subsystem ports: No 00:26:56.700 May have multiple controllers: No 00:26:56.700 Associated with SR-IOV VF: No 00:26:56.700 Max Data Transfer Size: Unlimited 00:26:56.700 Max Number of Namespaces: 0 00:26:56.700 Max Number of I/O Queues: 1024 00:26:56.700 NVMe Specification Version (VS): 1.3 00:26:56.700 NVMe Specification Version (Identify): 1.3 00:26:56.700 Maximum Queue Entries: 1024 00:26:56.700 Contiguous Queues Required: No 00:26:56.700 Arbitration Mechanisms Supported 00:26:56.700 Weighted Round Robin: Not Supported 00:26:56.700 Vendor Specific: Not Supported 00:26:56.700 Reset Timeout: 7500 ms 00:26:56.700 Doorbell Stride: 4 bytes 00:26:56.700 NVM Subsystem Reset: Not Supported 00:26:56.700 Command Sets Supported 00:26:56.700 NVM Command Set: Supported 00:26:56.700 Boot Partition: Not Supported 00:26:56.700 Memory Page Size Minimum: 4096 bytes 00:26:56.700 Memory Page Size Maximum: 4096 bytes 00:26:56.700 Persistent Memory Region: Not Supported 00:26:56.701 Optional Asynchronous Events Supported 00:26:56.701 Namespace Attribute Notices: Not Supported 00:26:56.701 Firmware Activation Notices: Not Supported 00:26:56.701 ANA Change Notices: Not Supported 00:26:56.701 PLE Aggregate Log Change Notices: Not Supported 00:26:56.701 LBA Status Info Alert Notices: Not Supported 00:26:56.701 EGE Aggregate Log Change Notices: Not Supported 00:26:56.701 Normal NVM Subsystem Shutdown event: Not Supported 00:26:56.701 Zone Descriptor Change Notices: Not Supported 00:26:56.701 Discovery Log Change Notices: Supported 00:26:56.701 Controller Attributes 00:26:56.701 128-bit Host Identifier: Not Supported 00:26:56.701 Non-Operational Permissive Mode: Not Supported 00:26:56.701 NVM Sets: Not Supported 00:26:56.701 Read Recovery Levels: Not Supported 00:26:56.701 Endurance Groups: Not Supported 00:26:56.701 Predictable Latency Mode: Not Supported 00:26:56.701 Traffic Based Keep ALive: Not Supported 00:26:56.701 Namespace Granularity: Not Supported 00:26:56.701 SQ Associations: Not Supported 00:26:56.701 UUID List: Not Supported 00:26:56.701 Multi-Domain Subsystem: Not Supported 00:26:56.701 Fixed Capacity Management: Not Supported 00:26:56.701 Variable Capacity Management: Not Supported 00:26:56.701 Delete Endurance Group: Not Supported 00:26:56.701 Delete NVM Set: Not Supported 00:26:56.701 Extended LBA Formats Supported: Not Supported 00:26:56.701 Flexible Data Placement 
Supported: Not Supported 00:26:56.701 00:26:56.701 Controller Memory Buffer Support 00:26:56.701 ================================ 00:26:56.701 Supported: No 00:26:56.701 00:26:56.701 Persistent Memory Region Support 00:26:56.701 ================================ 00:26:56.701 Supported: No 00:26:56.701 00:26:56.701 Admin Command Set Attributes 00:26:56.701 ============================ 00:26:56.701 Security Send/Receive: Not Supported 00:26:56.701 Format NVM: Not Supported 00:26:56.701 Firmware Activate/Download: Not Supported 00:26:56.701 Namespace Management: Not Supported 00:26:56.701 Device Self-Test: Not Supported 00:26:56.701 Directives: Not Supported 00:26:56.701 NVMe-MI: Not Supported 00:26:56.701 Virtualization Management: Not Supported 00:26:56.701 Doorbell Buffer Config: Not Supported 00:26:56.701 Get LBA Status Capability: Not Supported 00:26:56.701 Command & Feature Lockdown Capability: Not Supported 00:26:56.701 Abort Command Limit: 1 00:26:56.701 Async Event Request Limit: 1 00:26:56.701 Number of Firmware Slots: N/A 00:26:56.701 Firmware Slot 1 Read-Only: N/A 00:26:56.701 Firmware Activation Without Reset: N/A 00:26:56.701 Multiple Update Detection Support: N/A 00:26:56.701 Firmware Update Granularity: No Information Provided 00:26:56.701 Per-Namespace SMART Log: No 00:26:56.701 Asymmetric Namespace Access Log Page: Not Supported 00:26:56.701 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:56.701 Command Effects Log Page: Not Supported 00:26:56.701 Get Log Page Extended Data: Supported 00:26:56.701 Telemetry Log Pages: Not Supported 00:26:56.701 Persistent Event Log Pages: Not Supported 00:26:56.701 Supported Log Pages Log Page: May Support 00:26:56.701 Commands Supported & Effects Log Page: Not Supported 00:26:56.701 Feature Identifiers & Effects Log Page:May Support 00:26:56.701 NVMe-MI Commands & Effects Log Page: May Support 00:26:56.701 Data Area 4 for Telemetry Log: Not Supported 00:26:56.701 Error Log Page Entries Supported: 1 00:26:56.701 Keep Alive: Not Supported 00:26:56.701 00:26:56.701 NVM Command Set Attributes 00:26:56.701 ========================== 00:26:56.701 Submission Queue Entry Size 00:26:56.701 Max: 1 00:26:56.701 Min: 1 00:26:56.701 Completion Queue Entry Size 00:26:56.701 Max: 1 00:26:56.701 Min: 1 00:26:56.701 Number of Namespaces: 0 00:26:56.701 Compare Command: Not Supported 00:26:56.701 Write Uncorrectable Command: Not Supported 00:26:56.701 Dataset Management Command: Not Supported 00:26:56.701 Write Zeroes Command: Not Supported 00:26:56.701 Set Features Save Field: Not Supported 00:26:56.701 Reservations: Not Supported 00:26:56.701 Timestamp: Not Supported 00:26:56.701 Copy: Not Supported 00:26:56.701 Volatile Write Cache: Not Present 00:26:56.701 Atomic Write Unit (Normal): 1 00:26:56.701 Atomic Write Unit (PFail): 1 00:26:56.701 Atomic Compare & Write Unit: 1 00:26:56.701 Fused Compare & Write: Not Supported 00:26:56.701 Scatter-Gather List 00:26:56.701 SGL Command Set: Supported 00:26:56.701 SGL Keyed: Not Supported 00:26:56.701 SGL Bit Bucket Descriptor: Not Supported 00:26:56.701 SGL Metadata Pointer: Not Supported 00:26:56.701 Oversized SGL: Not Supported 00:26:56.701 SGL Metadata Address: Not Supported 00:26:56.701 SGL Offset: Supported 00:26:56.701 Transport SGL Data Block: Not Supported 00:26:56.701 Replay Protected Memory Block: Not Supported 00:26:56.701 00:26:56.701 Firmware Slot Information 00:26:56.701 ========================= 00:26:56.701 Active slot: 0 00:26:56.701 00:26:56.701 00:26:56.701 Error Log 00:26:56.701 
========= 00:26:56.701 00:26:56.701 Active Namespaces 00:26:56.701 ================= 00:26:56.701 Discovery Log Page 00:26:56.701 ================== 00:26:56.701 Generation Counter: 2 00:26:56.701 Number of Records: 2 00:26:56.701 Record Format: 0 00:26:56.701 00:26:56.701 Discovery Log Entry 0 00:26:56.701 ---------------------- 00:26:56.701 Transport Type: 3 (TCP) 00:26:56.701 Address Family: 1 (IPv4) 00:26:56.701 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:56.701 Entry Flags: 00:26:56.701 Duplicate Returned Information: 0 00:26:56.701 Explicit Persistent Connection Support for Discovery: 0 00:26:56.701 Transport Requirements: 00:26:56.701 Secure Channel: Not Specified 00:26:56.701 Port ID: 1 (0x0001) 00:26:56.701 Controller ID: 65535 (0xffff) 00:26:56.701 Admin Max SQ Size: 32 00:26:56.701 Transport Service Identifier: 4420 00:26:56.701 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:56.701 Transport Address: 10.0.0.1 00:26:56.701 Discovery Log Entry 1 00:26:56.701 ---------------------- 00:26:56.701 Transport Type: 3 (TCP) 00:26:56.701 Address Family: 1 (IPv4) 00:26:56.701 Subsystem Type: 2 (NVM Subsystem) 00:26:56.701 Entry Flags: 00:26:56.701 Duplicate Returned Information: 0 00:26:56.701 Explicit Persistent Connection Support for Discovery: 0 00:26:56.701 Transport Requirements: 00:26:56.701 Secure Channel: Not Specified 00:26:56.701 Port ID: 1 (0x0001) 00:26:56.701 Controller ID: 65535 (0xffff) 00:26:56.701 Admin Max SQ Size: 32 00:26:56.701 Transport Service Identifier: 4420 00:26:56.701 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:56.701 Transport Address: 10.0.0.1 00:26:56.701 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:56.963 get_feature(0x01) failed 00:26:56.963 get_feature(0x02) failed 00:26:56.963 get_feature(0x04) failed 00:26:56.963 ===================================================== 00:26:56.963 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:56.963 ===================================================== 00:26:56.963 Controller Capabilities/Features 00:26:56.963 ================================ 00:26:56.963 Vendor ID: 0000 00:26:56.963 Subsystem Vendor ID: 0000 00:26:56.963 Serial Number: bf10cfa55b2fbd4c204b 00:26:56.963 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:56.963 Firmware Version: 6.8.9-20 00:26:56.963 Recommended Arb Burst: 6 00:26:56.963 IEEE OUI Identifier: 00 00 00 00:26:56.963 Multi-path I/O 00:26:56.963 May have multiple subsystem ports: Yes 00:26:56.963 May have multiple controllers: Yes 00:26:56.963 Associated with SR-IOV VF: No 00:26:56.963 Max Data Transfer Size: Unlimited 00:26:56.963 Max Number of Namespaces: 1024 00:26:56.963 Max Number of I/O Queues: 128 00:26:56.963 NVMe Specification Version (VS): 1.3 00:26:56.963 NVMe Specification Version (Identify): 1.3 00:26:56.963 Maximum Queue Entries: 1024 00:26:56.963 Contiguous Queues Required: No 00:26:56.963 Arbitration Mechanisms Supported 00:26:56.963 Weighted Round Robin: Not Supported 00:26:56.963 Vendor Specific: Not Supported 00:26:56.963 Reset Timeout: 7500 ms 00:26:56.963 Doorbell Stride: 4 bytes 00:26:56.963 NVM Subsystem Reset: Not Supported 00:26:56.963 Command Sets Supported 00:26:56.963 NVM Command Set: Supported 00:26:56.963 Boot Partition: Not Supported 00:26:56.963 
Memory Page Size Minimum: 4096 bytes 00:26:56.963 Memory Page Size Maximum: 4096 bytes 00:26:56.963 Persistent Memory Region: Not Supported 00:26:56.963 Optional Asynchronous Events Supported 00:26:56.963 Namespace Attribute Notices: Supported 00:26:56.963 Firmware Activation Notices: Not Supported 00:26:56.963 ANA Change Notices: Supported 00:26:56.963 PLE Aggregate Log Change Notices: Not Supported 00:26:56.963 LBA Status Info Alert Notices: Not Supported 00:26:56.963 EGE Aggregate Log Change Notices: Not Supported 00:26:56.963 Normal NVM Subsystem Shutdown event: Not Supported 00:26:56.963 Zone Descriptor Change Notices: Not Supported 00:26:56.963 Discovery Log Change Notices: Not Supported 00:26:56.963 Controller Attributes 00:26:56.963 128-bit Host Identifier: Supported 00:26:56.963 Non-Operational Permissive Mode: Not Supported 00:26:56.963 NVM Sets: Not Supported 00:26:56.963 Read Recovery Levels: Not Supported 00:26:56.963 Endurance Groups: Not Supported 00:26:56.963 Predictable Latency Mode: Not Supported 00:26:56.963 Traffic Based Keep ALive: Supported 00:26:56.963 Namespace Granularity: Not Supported 00:26:56.963 SQ Associations: Not Supported 00:26:56.963 UUID List: Not Supported 00:26:56.963 Multi-Domain Subsystem: Not Supported 00:26:56.963 Fixed Capacity Management: Not Supported 00:26:56.963 Variable Capacity Management: Not Supported 00:26:56.963 Delete Endurance Group: Not Supported 00:26:56.963 Delete NVM Set: Not Supported 00:26:56.963 Extended LBA Formats Supported: Not Supported 00:26:56.963 Flexible Data Placement Supported: Not Supported 00:26:56.963 00:26:56.963 Controller Memory Buffer Support 00:26:56.963 ================================ 00:26:56.963 Supported: No 00:26:56.963 00:26:56.963 Persistent Memory Region Support 00:26:56.963 ================================ 00:26:56.963 Supported: No 00:26:56.963 00:26:56.963 Admin Command Set Attributes 00:26:56.963 ============================ 00:26:56.963 Security Send/Receive: Not Supported 00:26:56.963 Format NVM: Not Supported 00:26:56.963 Firmware Activate/Download: Not Supported 00:26:56.963 Namespace Management: Not Supported 00:26:56.963 Device Self-Test: Not Supported 00:26:56.963 Directives: Not Supported 00:26:56.963 NVMe-MI: Not Supported 00:26:56.963 Virtualization Management: Not Supported 00:26:56.963 Doorbell Buffer Config: Not Supported 00:26:56.963 Get LBA Status Capability: Not Supported 00:26:56.963 Command & Feature Lockdown Capability: Not Supported 00:26:56.963 Abort Command Limit: 4 00:26:56.963 Async Event Request Limit: 4 00:26:56.963 Number of Firmware Slots: N/A 00:26:56.963 Firmware Slot 1 Read-Only: N/A 00:26:56.963 Firmware Activation Without Reset: N/A 00:26:56.963 Multiple Update Detection Support: N/A 00:26:56.963 Firmware Update Granularity: No Information Provided 00:26:56.963 Per-Namespace SMART Log: Yes 00:26:56.963 Asymmetric Namespace Access Log Page: Supported 00:26:56.963 ANA Transition Time : 10 sec 00:26:56.963 00:26:56.963 Asymmetric Namespace Access Capabilities 00:26:56.963 ANA Optimized State : Supported 00:26:56.963 ANA Non-Optimized State : Supported 00:26:56.963 ANA Inaccessible State : Supported 00:26:56.963 ANA Persistent Loss State : Supported 00:26:56.963 ANA Change State : Supported 00:26:56.963 ANAGRPID is not changed : No 00:26:56.963 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:56.963 00:26:56.963 ANA Group Identifier Maximum : 128 00:26:56.963 Number of ANA Group Identifiers : 128 00:26:56.963 Max Number of Allowed Namespaces : 1024 00:26:56.963 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:56.963 Command Effects Log Page: Supported 00:26:56.963 Get Log Page Extended Data: Supported 00:26:56.963 Telemetry Log Pages: Not Supported 00:26:56.963 Persistent Event Log Pages: Not Supported 00:26:56.963 Supported Log Pages Log Page: May Support 00:26:56.963 Commands Supported & Effects Log Page: Not Supported 00:26:56.963 Feature Identifiers & Effects Log Page:May Support 00:26:56.963 NVMe-MI Commands & Effects Log Page: May Support 00:26:56.963 Data Area 4 for Telemetry Log: Not Supported 00:26:56.963 Error Log Page Entries Supported: 128 00:26:56.963 Keep Alive: Supported 00:26:56.963 Keep Alive Granularity: 1000 ms 00:26:56.964 00:26:56.964 NVM Command Set Attributes 00:26:56.964 ========================== 00:26:56.964 Submission Queue Entry Size 00:26:56.964 Max: 64 00:26:56.964 Min: 64 00:26:56.964 Completion Queue Entry Size 00:26:56.964 Max: 16 00:26:56.964 Min: 16 00:26:56.964 Number of Namespaces: 1024 00:26:56.964 Compare Command: Not Supported 00:26:56.964 Write Uncorrectable Command: Not Supported 00:26:56.964 Dataset Management Command: Supported 00:26:56.964 Write Zeroes Command: Supported 00:26:56.964 Set Features Save Field: Not Supported 00:26:56.964 Reservations: Not Supported 00:26:56.964 Timestamp: Not Supported 00:26:56.964 Copy: Not Supported 00:26:56.964 Volatile Write Cache: Present 00:26:56.964 Atomic Write Unit (Normal): 1 00:26:56.964 Atomic Write Unit (PFail): 1 00:26:56.964 Atomic Compare & Write Unit: 1 00:26:56.964 Fused Compare & Write: Not Supported 00:26:56.964 Scatter-Gather List 00:26:56.964 SGL Command Set: Supported 00:26:56.964 SGL Keyed: Not Supported 00:26:56.964 SGL Bit Bucket Descriptor: Not Supported 00:26:56.964 SGL Metadata Pointer: Not Supported 00:26:56.964 Oversized SGL: Not Supported 00:26:56.964 SGL Metadata Address: Not Supported 00:26:56.964 SGL Offset: Supported 00:26:56.964 Transport SGL Data Block: Not Supported 00:26:56.964 Replay Protected Memory Block: Not Supported 00:26:56.964 00:26:56.964 Firmware Slot Information 00:26:56.964 ========================= 00:26:56.964 Active slot: 0 00:26:56.964 00:26:56.964 Asymmetric Namespace Access 00:26:56.964 =========================== 00:26:56.964 Change Count : 0 00:26:56.964 Number of ANA Group Descriptors : 1 00:26:56.964 ANA Group Descriptor : 0 00:26:56.964 ANA Group ID : 1 00:26:56.964 Number of NSID Values : 1 00:26:56.964 Change Count : 0 00:26:56.964 ANA State : 1 00:26:56.964 Namespace Identifier : 1 00:26:56.964 00:26:56.964 Commands Supported and Effects 00:26:56.964 ============================== 00:26:56.964 Admin Commands 00:26:56.964 -------------- 00:26:56.964 Get Log Page (02h): Supported 00:26:56.964 Identify (06h): Supported 00:26:56.964 Abort (08h): Supported 00:26:56.964 Set Features (09h): Supported 00:26:56.964 Get Features (0Ah): Supported 00:26:56.964 Asynchronous Event Request (0Ch): Supported 00:26:56.964 Keep Alive (18h): Supported 00:26:56.964 I/O Commands 00:26:56.964 ------------ 00:26:56.964 Flush (00h): Supported 00:26:56.964 Write (01h): Supported LBA-Change 00:26:56.964 Read (02h): Supported 00:26:56.964 Write Zeroes (08h): Supported LBA-Change 00:26:56.964 Dataset Management (09h): Supported 00:26:56.964 00:26:56.964 Error Log 00:26:56.964 ========= 00:26:56.964 Entry: 0 00:26:56.964 Error Count: 0x3 00:26:56.964 Submission Queue Id: 0x0 00:26:56.964 Command Id: 0x5 00:26:56.964 Phase Bit: 0 00:26:56.964 Status Code: 0x2 00:26:56.964 Status Code Type: 0x0 00:26:56.964 Do Not Retry: 1 00:26:56.964 
Error Location: 0x28 00:26:56.964 LBA: 0x0 00:26:56.964 Namespace: 0x0 00:26:56.964 Vendor Log Page: 0x0 00:26:56.964 ----------- 00:26:56.964 Entry: 1 00:26:56.964 Error Count: 0x2 00:26:56.964 Submission Queue Id: 0x0 00:26:56.964 Command Id: 0x5 00:26:56.964 Phase Bit: 0 00:26:56.964 Status Code: 0x2 00:26:56.964 Status Code Type: 0x0 00:26:56.964 Do Not Retry: 1 00:26:56.964 Error Location: 0x28 00:26:56.964 LBA: 0x0 00:26:56.964 Namespace: 0x0 00:26:56.964 Vendor Log Page: 0x0 00:26:56.964 ----------- 00:26:56.964 Entry: 2 00:26:56.964 Error Count: 0x1 00:26:56.964 Submission Queue Id: 0x0 00:26:56.964 Command Id: 0x4 00:26:56.964 Phase Bit: 0 00:26:56.964 Status Code: 0x2 00:26:56.964 Status Code Type: 0x0 00:26:56.964 Do Not Retry: 1 00:26:56.964 Error Location: 0x28 00:26:56.964 LBA: 0x0 00:26:56.964 Namespace: 0x0 00:26:56.964 Vendor Log Page: 0x0 00:26:56.964 00:26:56.964 Number of Queues 00:26:56.964 ================ 00:26:56.964 Number of I/O Submission Queues: 128 00:26:56.964 Number of I/O Completion Queues: 128 00:26:56.964 00:26:56.964 ZNS Specific Controller Data 00:26:56.964 ============================ 00:26:56.964 Zone Append Size Limit: 0 00:26:56.964 00:26:56.964 00:26:56.964 Active Namespaces 00:26:56.964 ================= 00:26:56.964 get_feature(0x05) failed 00:26:56.964 Namespace ID:1 00:26:56.964 Command Set Identifier: NVM (00h) 00:26:56.964 Deallocate: Supported 00:26:56.964 Deallocated/Unwritten Error: Not Supported 00:26:56.964 Deallocated Read Value: Unknown 00:26:56.964 Deallocate in Write Zeroes: Not Supported 00:26:56.964 Deallocated Guard Field: 0xFFFF 00:26:56.964 Flush: Supported 00:26:56.964 Reservation: Not Supported 00:26:56.964 Namespace Sharing Capabilities: Multiple Controllers 00:26:56.964 Size (in LBAs): 3750748848 (1788GiB) 00:26:56.964 Capacity (in LBAs): 3750748848 (1788GiB) 00:26:56.964 Utilization (in LBAs): 3750748848 (1788GiB) 00:26:56.964 UUID: 8d6d8a14-6de9-4de8-a81f-e74793090d41 00:26:56.964 Thin Provisioning: Not Supported 00:26:56.964 Per-NS Atomic Units: Yes 00:26:56.964 Atomic Write Unit (Normal): 8 00:26:56.964 Atomic Write Unit (PFail): 8 00:26:56.964 Preferred Write Granularity: 8 00:26:56.964 Atomic Compare & Write Unit: 8 00:26:56.964 Atomic Boundary Size (Normal): 0 00:26:56.964 Atomic Boundary Size (PFail): 0 00:26:56.964 Atomic Boundary Offset: 0 00:26:56.964 NGUID/EUI64 Never Reused: No 00:26:56.964 ANA group ID: 1 00:26:56.964 Namespace Write Protected: No 00:26:56.964 Number of LBA Formats: 1 00:26:56.964 Current LBA Format: LBA Format #00 00:26:56.964 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:56.964 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:56.964 rmmod nvme_tcp 00:26:56.964 rmmod nvme_fabrics 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:56.964 04:38:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:58.876 04:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:58.876 04:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:58.876 04:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:58.876 04:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:59.137 04:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:59.137 04:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:59.137 04:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:59.137 04:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:59.137 04:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:59.137 04:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:59.137 04:38:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:02.440 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:27:02.440 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:27:02.440 0000:80:01.4 
(8086 0b00): ioatdma -> vfio-pci
00:27:02.440 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:27:02.440 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:27:02.440 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:27:02.440 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:27:02.440 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:27:02.440 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:27:02.440 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:27:02.440 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:27:02.440 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:27:02.440 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:27:02.440 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:27:02.440 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:27:02.440 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:27:02.440 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:27:03.011
00:27:03.011 real 0m19.130s
00:27:03.011 user 0m5.300s
00:27:03.011 sys 0m10.886s
00:27:03.011 04:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable
00:27:03.011 04:38:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x
00:27:03.011 ************************************
00:27:03.011 END TEST nvmf_identify_kernel_target
00:27:03.011 ************************************
00:27:03.011 04:38:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:27:03.011 04:38:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:27:03.011 04:38:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:27:03.011 04:38:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:03.011 ************************************
00:27:03.011 START TEST nvmf_auth_host
00:27:03.011 ************************************
00:27:03.011 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp
00:27:03.011 * Looking for test storage...
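Re-sourcing nvmf/common.sh for auth.sh below reproduces the earlier complaint from test/nvmf/common.sh line 33, "[: : integer expression expected": a numeric test receives an empty string. The variable behind the empty expansion is not visible in the xtrace, so SOME_FLAG in this sketch is a hypothetical stand-in illustrating the failure mode and one conventional guard:

    # '[' '' -eq 1 ']' fails because test's -eq requires integer operands.
    unset SOME_FLAG                # hypothetical stand-in; real name not in the trace
    [ "$SOME_FLAG" -eq 1 ]         # bash: [: : integer expression expected (exit 2)
    # Defaulting the expansion keeps the comparison well-formed:
    [ "${SOME_FLAG:-0}" -eq 1 ] && echo enabled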
00:27:03.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:03.011 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:03.011 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:03.011 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:03.273 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:03.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.273 --rc genhtml_branch_coverage=1 00:27:03.273 --rc genhtml_function_coverage=1 00:27:03.273 --rc genhtml_legend=1 00:27:03.273 --rc geninfo_all_blocks=1 00:27:03.273 --rc geninfo_unexecuted_blocks=1 00:27:03.273 00:27:03.273 ' 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:03.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.274 --rc genhtml_branch_coverage=1 00:27:03.274 --rc genhtml_function_coverage=1 00:27:03.274 --rc genhtml_legend=1 00:27:03.274 --rc geninfo_all_blocks=1 00:27:03.274 --rc geninfo_unexecuted_blocks=1 00:27:03.274 00:27:03.274 ' 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:03.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.274 --rc genhtml_branch_coverage=1 00:27:03.274 --rc genhtml_function_coverage=1 00:27:03.274 --rc genhtml_legend=1 00:27:03.274 --rc geninfo_all_blocks=1 00:27:03.274 --rc geninfo_unexecuted_blocks=1 00:27:03.274 00:27:03.274 ' 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:03.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:03.274 --rc genhtml_branch_coverage=1 00:27:03.274 --rc genhtml_function_coverage=1 00:27:03.274 --rc genhtml_legend=1 00:27:03.274 --rc geninfo_all_blocks=1 00:27:03.274 --rc geninfo_unexecuted_blocks=1 00:27:03.274 00:27:03.274 ' 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:03.274 04:38:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:03.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:03.274 04:38:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:11.416 04:38:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:11.416 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:11.416 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.416 
04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:11.416 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:11.416 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.416 04:38:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:11.416 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:11.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:11.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.657 ms 00:27:11.417 00:27:11.417 --- 10.0.0.2 ping statistics --- 00:27:11.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.417 rtt min/avg/max/mdev = 0.657/0.657/0.657/0.000 ms 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:11.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:11.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:27:11.417 00:27:11.417 --- 10.0.0.1 ping statistics --- 00:27:11.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.417 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=3141775 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 3141775 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3141775 ']' 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
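The nvmftestinit trace above (nvmf/common.sh@250-291) reduces to a small reusable pattern: move one port of the NIC pair into a private network namespace for the target, address both ends of the link, open the NVMe/TCP listener port in the firewall, and prove reachability with a ping in each direction before nvmf_tgt is started inside the namespace. A minimal sketch of that pattern, using placeholder interface names eth_tgt and eth_ini in place of the cvl_0_0/cvl_0_1 pair from this run:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set eth_tgt netns "$NS"                # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev eth_ini            # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev eth_tgt
ip link set eth_ini up
ip netns exec "$NS" ip link set eth_tgt up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP default port
ping -c 1 10.0.0.2                             # root namespace -> target namespace
ip netns exec "$NS" ping -c 1 10.0.0.1         # target namespace -> root namespace

This is why the nvmf_tgt invocation above is wrapped in 'ip netns exec cvl_0_0_ns_spdk': every target-side command runs inside the namespace, while nvme-cli on the initiator side keeps running in the root namespace.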
00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:11.417 04:38:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c86d872b7ea2ebee72be7762cfc3fff2 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pch 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c86d872b7ea2ebee72be7762cfc3fff2 0 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c86d872b7ea2ebee72be7762cfc3fff2 0 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c86d872b7ea2ebee72be7762cfc3fff2 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pch 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pch 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.pch 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:11.417 04:38:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6d5731e7aa04236e8b52b8e9c7ab695ed4c397973e0ed77789189a8a50cd7bfd 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.OGM 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6d5731e7aa04236e8b52b8e9c7ab695ed4c397973e0ed77789189a8a50cd7bfd 3 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6d5731e7aa04236e8b52b8e9c7ab695ed4c397973e0ed77789189a8a50cd7bfd 3 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=6d5731e7aa04236e8b52b8e9c7ab695ed4c397973e0ed77789189a8a50cd7bfd 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.OGM 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.OGM 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.OGM 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3948ac9ea6bc50bd82abd50d83720bf2a48b8bf4cbc1cb6a 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.qNQ 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3948ac9ea6bc50bd82abd50d83720bf2a48b8bf4cbc1cb6a 0 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3948ac9ea6bc50bd82abd50d83720bf2a48b8bf4cbc1cb6a 0 
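Each gen_dhchap_key call in this stretch is the same recipe: xxd pulls the requested number of random bytes from /dev/urandom as a hex string, and format_dhchap_key (whose per-step trace continues below) wraps that string into the DHHC-1 secret representation used for NVMe in-band authentication, DHHC-1:<digest id>:<base64 payload>:, with digest id 00 for a plain key and 01/02/03 for sha256/sha384/sha512. A condensed sketch of what the xxd/python pair computes, assuming the usual TP 8006 layout in which the payload is the secret with its CRC-32 appended little-endian (the helper name make_dhchap_secret is ours, not SPDK's):

make_dhchap_secret() {
    # usage: make_dhchap_secret <hex length> <digest id 0..3>
    local hexlen=$1 digest=$2 key
    key=$(xxd -p -c0 -l $((hexlen / 2)) /dev/urandom)   # hexlen hex characters of entropy
    python3 -c 'import base64, sys, zlib
k = sys.argv[1].encode()
crc = zlib.crc32(k).to_bytes(4, "little")   # four check bytes guard the secret against typos
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k + crc).decode()))' "$key" "$digest"
}
make_dhchap_secret 48 0    # a DHHC-1:00:...: secret, like keys[1] generated above

Base64-decoding the key that surfaces later in the log (DHHC-1:00:Mzk0OGFj...) yields exactly the 48-digit hex string drawn here plus four trailing check bytes, which is consistent with this layout.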
00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3948ac9ea6bc50bd82abd50d83720bf2a48b8bf4cbc1cb6a 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.qNQ 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.qNQ 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.qNQ 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8b80e514131810cf1e4ec2cad7eff87b58585ed3365282f0 00:27:11.417 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:11.418 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zXb 00:27:11.418 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8b80e514131810cf1e4ec2cad7eff87b58585ed3365282f0 2 00:27:11.418 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8b80e514131810cf1e4ec2cad7eff87b58585ed3365282f0 2 00:27:11.418 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:11.418 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:11.418 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8b80e514131810cf1e4ec2cad7eff87b58585ed3365282f0 00:27:11.418 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:11.418 04:38:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zXb 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zXb 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.zXb 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.418 04:38:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=89a423de3c750bc90648a772889cdbf2 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.E8t 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 89a423de3c750bc90648a772889cdbf2 1 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 89a423de3c750bc90648a772889cdbf2 1 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=89a423de3c750bc90648a772889cdbf2 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:11.418 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.E8t 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.E8t 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.E8t 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=29f5a85bd78376100c9496db3b0fcec5 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.4g2 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 29f5a85bd78376100c9496db3b0fcec5 1 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 29f5a85bd78376100c9496db3b0fcec5 1 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=29f5a85bd78376100c9496db3b0fcec5 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.4g2 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.4g2 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.4g2 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3c56889155ed523f06e37b388ff5cf3b15a9c0e0eea373e1 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.XWu 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3c56889155ed523f06e37b388ff5cf3b15a9c0e0eea373e1 2 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3c56889155ed523f06e37b388ff5cf3b15a9c0e0eea373e1 2 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=3c56889155ed523f06e37b388ff5cf3b15a9c0e0eea373e1 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.XWu 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.XWu 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.XWu 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:11.679 04:38:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=535f0a7220ba1edcd417b6afbc782d3b 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.l20 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 535f0a7220ba1edcd417b6afbc782d3b 0 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 535f0a7220ba1edcd417b6afbc782d3b 0 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=535f0a7220ba1edcd417b6afbc782d3b 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.l20 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.l20 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.l20 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=438c1275a385f6408da016c3591fdb6842006153ac4b69c2354dd00c86e7b7a9 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.6qB 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 438c1275a385f6408da016c3591fdb6842006153ac4b69c2354dd00c86e7b7a9 3 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 438c1275a385f6408da016c3591fdb6842006153ac4b69c2354dd00c86e7b7a9 3 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=438c1275a385f6408da016c3591fdb6842006153ac4b69c2354dd00c86e7b7a9 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:11.679 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.6qB 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.6qB 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.6qB 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3141775 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 3141775 ']' 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:11.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pch 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.OGM ]] 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OGM 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.qNQ 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.941 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.zXb ]] 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.zXb 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.E8t 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.4g2 ]] 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.4g2 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.XWu 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.l20 ]] 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.l20 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.6qB 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:12.202 04:38:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]]
00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:27:12.202 04:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:27:15.500 Waiting for block devices as requested
00:27:15.500 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma
00:27:15.500 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma
00:27:15.500 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma
00:27:15.760 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma
00:27:15.760 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma
00:27:15.760 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma
00:27:16.021 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma
00:27:16.021 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma
00:27:16.021 0000:65:00.0 (144d a80a): vfio-pci -> nvme
00:27:16.280 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma
00:27:16.280 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma
00:27:16.541 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma
00:27:16.541 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma
00:27:16.541 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma
00:27:16.541 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma
00:27:16.800 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma
00:27:16.800 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:27:17.743 No valid GPT data, bailing
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]]
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
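configure_kernel_target drives the in-kernel target entirely through configfs: mkdir creates an object, and writing a file sets one of its attributes. The three mkdir calls above create the subsystem, namespace 1 inside it, and port 1; the echo lines traced just below fill in the attribute values (the trace shows only the values, not the destination files). Condensed into one place, with the standard nvmet attribute file names spelled out and the cosmetic SPDK-... identity string left out:

NVMET=/sys/kernel/config/nvmet
SUB=$NVMET/subsystems/nqn.2024-02.io.spdk:cnode0
PORT=$NVMET/ports/1
modprobe nvmet
mkdir "$SUB" "$SUB/namespaces/1" "$PORT"       # left to right, so parents exist first
echo 1 > "$SUB/attr_allow_any_host"            # auth.sh flips this to 0 once host0 is allowlisted
echo /dev/nvme0n1 > "$SUB/namespaces/1/device_path"   # back namespace 1 with the local disk
echo 1 > "$SUB/namespaces/1/enable"
echo 10.0.0.1 > "$PORT/addr_traddr"
echo tcp > "$PORT/addr_trtype"
echo 4420 > "$PORT/addr_trsvcid"
echo ipv4 > "$PORT/addr_adrfam"
ln -s "$SUB" "$PORT/subsystems/"               # linking subsystem to port starts the listener

The nvme discover call below is the direct check that the listener came up and that nqn.2024-02.io.spdk:cnode0 is visible at 10.0.0.1:4420.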
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420
00:27:17.743 
00:27:17.743 Discovery Log Number of Records 2, Generation counter 2
00:27:17.743 =====Discovery Log Entry 0======
00:27:17.743 trtype: tcp
00:27:17.743 adrfam: ipv4
00:27:17.743 subtype: current discovery subsystem
00:27:17.743 treq: not specified, sq flow control disable supported
00:27:17.743 portid: 1
00:27:17.743 trsvcid: 4420
00:27:17.743 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:27:17.743 traddr: 10.0.0.1
00:27:17.743 eflags: none
00:27:17.743 sectype: none
00:27:17.743 =====Discovery Log Entry 1======
00:27:17.743 trtype: tcp
00:27:17.743 adrfam: ipv4
00:27:17.743 subtype: nvme subsystem
00:27:17.743 treq: not specified, sq flow control disable supported
00:27:17.743 portid: 1
00:27:17.743 trsvcid: 4420
00:27:17.743 subnqn: nqn.2024-02.io.spdk:cnode0
00:27:17.743 traddr: 10.0.0.1
00:27:17.743 eflags: none
00:27:17.743 sectype: none
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:27:17.743 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==:
00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==:
00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
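nvmet_auth_set_key is the kernel-target half of the DHCHAP setup: each host NQN gets a directory under /sys/kernel/config/nvmet/hosts whose attribute files hold the secrets and negotiation parameters, and the allowed_hosts symlink created above ties that host to the subsystem. A sketch of the writes this trace performs (the hmac echo above, the ffdhe2048 and DHHC-1 echoes just below), assuming the standard nvmet host attribute names dhchap_key, dhchap_ctrl_key, dhchap_hash and dhchap_dhgroup:

HOST=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
SUB=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
mkdir "$HOST"
echo 0 > "$SUB/attr_allow_any_host"            # from here on only allowlisted hosts may connect
ln -s "$HOST" "$SUB/allowed_hosts/"
echo 'hmac(sha256)' > "$HOST/dhchap_hash"      # digest used for the CHAP challenge
echo ffdhe2048 > "$HOST/dhchap_dhgroup"        # FF-DHE group negotiated with the host
echo "DHHC-1:00:Mzk0OGFj...==:" > "$HOST/dhchap_key"        # host secret (keys[1] above, abbreviated)
echo "DHHC-1:02:OGI4MGU1...==:" > "$HOST/dhchap_ctrl_key"   # controller secret (ckeys[1], abbreviated)

With only dhchap_key set, the target authenticates the host; writing dhchap_ctrl_key as well makes authentication bidirectional, which is why auth.sh generated a ckey alongside every key. The DHHC-1 strings are just the base64 forms of the hex keys drawn from /dev/urandom at 04:38:24 above.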
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.744 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.005 nvme0n1 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: ]] 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
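Each connect_authenticate pass traced here (this one is sha256 / ffdhe2048 / keyid 0) reduces to a short target-side plus host-side sequence. The condensed sketch below is built only from commands visible in this trace; the configfs attribute names are an assumption about where the bare echoes in nvmet_auth_set_key land (the trace shows only the values being written), and key0/ckey0 are keyring entries the test registered earlier in the run.

# Condensed sketch of one connect_authenticate pass (sha256, ffdhe2048, keyid 0).
# Assumption: the echoes in nvmet_auth_set_key write the kernel nvmet configfs
# attributes named below; the DHHC-1 secrets are elided here.
host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

# Target side: install the host (and, when present, controller) DH-HMAC-CHAP secret.
echo 'hmac(sha256)'  > "$host_dir/dhchap_hash"      # digest under test
echo ffdhe2048       > "$host_dir/dhchap_dhgroup"   # DH group under test
echo 'DHHC-1:00:...' > "$host_dir/dhchap_key"       # host secret (keys[0])
echo 'DHHC-1:03:...' > "$host_dir/dhchap_ctrl_key"  # controller secret (ckeys[0])

# Host side: restrict the initiator to this digest/dhgroup combination, then
# attach with the matching keyring entries.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Success check and teardown before the next combination.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

The get_main_ns_ip helper traced before each attach simply resolves the address to use: for tcp it selects NVMF_INITIATOR_IP, which evaluates to 10.0.0.1 throughout this run.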
00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.005 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.006 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.268 nvme0n1 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.268 04:38:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.268 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.529 nvme0n1 00:27:18.529 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.529 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.529 04:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.529 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.790 nvme0n1 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: ]] 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.790 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.051 nvme0n1 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.051 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.313 nvme0n1 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.313 04:38:32 
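A note on the secrets cycled through these iterations: they use the NVMe in-band-authentication representation DHHC-1:<t>:<base64>:, where the two-digit <t> field records how the secret was transformed (00 = used as-is, 01/02/03 = transformed with SHA-256/SHA-384/SHA-512) and the base64 payload carries the secret plus a CRC-32 check — which is why keyids 0 through 4 in this run mix DHHC-1:00:, :01:, :02: and :03: prefixes of different lengths. Outside this harness, nvme-cli can mint such a secret; the flags below are from nvme-cli 2.x and are stated as an assumption, not taken from this log.

# Hypothetical example (not part of this run): generate a SHA-256-transformed
# 32-byte DH-HMAC-CHAP secret for the host NQN used above.
nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn=nqn.2024-02.io.spdk:host0
# prints a secret of the form DHHC-1:01:<base64>: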
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.313 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: ]] 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.314 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.602 nvme0n1 00:27:19.602 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.602 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.602 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.602 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.602 04:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.602 
04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.602 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.603 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.936 nvme0n1 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.936 04:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.936 nvme0n1 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.936 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.208 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.208 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.208 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.208 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: ]] 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.209 04:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.209 nvme0n1 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.209 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:20.471 04:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.471 04:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.471 nvme0n1 00:27:20.471 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.471 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.471 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.471 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.471 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.471 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.732 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.732 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:20.732 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.732 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: ]] 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.733 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.993 nvme0n1 00:27:20.993 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.993 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.993 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:20.994 04:38:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.994 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.255 nvme0n1 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
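The `nvmet_auth_set_key sha256 ffdhe4096 2` call traced above programs the kernel nvmet target with the digest, DH group, and per-host secrets before each connection attempt; the trace only shows the helper's `echo` commands, not their destinations. A minimal sketch of that step, assuming the standard nvmet configfs layout (`/sys/kernel/config/nvmet/hosts/<hostnqn>/dhchap_*` attributes, which this excerpt does not itself show), with the keyid=2 secrets taken verbatim from the trace:

```bash
#!/usr/bin/env bash
# Sketch: install DH-HMAC-CHAP parameters for one host on a kernel nvmet
# target, mirroring what nvmet_auth_set_key's echo calls do in the trace.
# Assumption: configfs is mounted and the host entry already exists.
hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn

key='DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81:'
ckey='DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f:'

echo 'hmac(sha256)' > "$host_dir/dhchap_hash"     # digest for this keyid
echo ffdhe4096      > "$host_dir/dhchap_dhgroup"  # DH group under test
echo "$key"         > "$host_dir/dhchap_key"      # host key
# The controller key is only set when a ckey exists for this keyid,
# matching the [[ -z $ckey ]] guard visible in the trace.
[[ -n "$ckey" ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"
```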
00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.255 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.256 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.256 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.256 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.256 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.256 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.256 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.256 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.256 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.256 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.256 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:21.256 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.256 04:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.828 nvme0n1 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: ]] 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.828 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.090 nvme0n1 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.090 04:38:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.090 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.352 nvme0n1 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: ]] 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.352 04:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.925 nvme0n1 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 
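Every secret echoed in this trace follows the NVMe DH-HMAC-CHAP secret representation: a `DHHC-1:NN:` prefix, a base64 payload, and a trailing `:`. On my reading of that format (not something the trace itself asserts), `NN` = 00 means the secret is used as-is while 01/02/03 request a SHA-256/384/512 transformation, and the decoded payload is the key material (32, 48, or 64 bytes) plus a 4-byte CRC-32 trailer. A quick sanity check against one of the keys logged above:

```bash
# Decode one of the DHHC-1 secrets seen above and report the payload size.
# Expected: key material plus a 4-byte CRC-32, so 36/52/68 bytes total for
# 32/48/64-byte keys respectively.
secret='DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==:'
payload=${secret#DHHC-1:??:}   # strip the "DHHC-1:NN:" prefix
payload=${payload%:}           # strip the trailing ':'
echo -n "$payload" | base64 -d | wc -c   # -> 52 (48-byte key + CRC-32)
```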
00:27:22.925 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.926 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.499 nvme0n1 00:27:23.499 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.499 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:23.499 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:23.499 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.499 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.499 04:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:23.499 04:38:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:23.499 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.072 nvme0n1 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: ]] 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.072 04:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.644 nvme0n1 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:24.644 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:24.645 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:24.645 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.645 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.216 nvme0n1 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: ]] 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:25.216 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.217 04:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:26.157 nvme0n1 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.157 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.158 04:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.728 nvme0n1 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:26.728 
04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.728 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.988 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.988 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.988 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:26.989 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:26.989 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:26.989 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.989 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.989 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:26.989 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.989 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:26.989 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:26.989 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:26.989 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:26.989 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.989 04:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.560 nvme0n1 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: ]] 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.560 
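
The xtrace only shows the echo halves of nvmet_auth_set_key (host/auth.sh@48-51); the redirections are invisible. Presumably they land in the DH-HMAC-CHAP attributes of the kernel nvmet host entry, roughly as below (configfs path and attribute names assumed, secrets elided):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha256)'  > "$host/dhchap_hash"      # digest under test
  echo ffdhe8192       > "$host/dhchap_dhgroup"   # DH group under test
  echo 'DHHC-1:02:...' > "$host/dhchap_key"       # host secret, keys[keyid]
  # Written only when a controller secret exists for this key index:
  echo 'DHHC-1:00:...' > "$host/dhchap_ctrl_key"

The two-digit field after DHHC-1 appears to encode the HMAC used to transform the secret (00 for none, 01/02/03 for SHA-256/384/512), following nvme-cli's gen-dhchap-key convention.
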
04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.560 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.820 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.820 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.820 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:27.820 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:27.820 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:27.820 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.820 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.820 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:27.820 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.820 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:27.820 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:27.820 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:27.820 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:27.820 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.820 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.391 nvme0n1 00:27:28.391 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.391 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.391 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.391 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.391 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.391 04:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.391 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.391 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.391 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.391 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:28.391 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.391 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:28.391 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:28.391 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.391 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.391 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:28.391 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:28.391 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:28.391 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:28.391 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.391 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:28.392 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:28.392 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:28.392 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:28.392 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:28.392 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:28.392 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:28.392 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:28.392 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:28.652 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.653 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.224 nvme0n1 00:27:29.224 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.225 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.225 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.225 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.225 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.225 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.225 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.225 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.225 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.225 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.225 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.225 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:29.225 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:29.225 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: ]] 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.486 04:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.486 nvme0n1 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
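
At this point the outer loop has rolled over: host/auth.sh@100-103 shows the digest advancing to sha384 while the DH group resets to ffdhe2048 and the key index restarts at 0. Reconstructed from those trace markers, the driving loop of the test looks like this:

  # Every digest is exercised against every DH group and every key index.
  for digest in "${digests[@]}"; do              # sha256, sha384, ...
      for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 ... ffdhe8192
          for keyid in "${!keys[@]}"; do         # 0..4 in this run
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
          done
      done
  done
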
common/autotest_common.sh@10 -- # set +x 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:29.486 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.487 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.748 nvme0n1 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:29.748 04:38:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:29.748 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.009 nvme0n1 00:27:30.009 04:38:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:30.009 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: ]] 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.010 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.271 nvme0n1 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.271 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.531 nvme0n1 00:27:30.531 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.531 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.531 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.531 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.531 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.531 04:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.531 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.531 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.531 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.531 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.531 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.531 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:30.531 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: ]] 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.532 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.791 nvme0n1 00:27:30.791 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.792 
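
The array assignment seen again just above at host/auth.sh@58 is why key index 4 attaches with --dhchap-key alone earlier in the trace: bash's :+ expansion produces the option pair only when a controller secret is configured, and ckeys[4] is empty in this run. A trimmed illustration of the idiom:

  # ${var:+word} expands to word only if var is set and non-empty, so the
  # flag pair disappears for key indexes without a controller secret.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"
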
04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:30.792 04:38:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.792 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.053 nvme0n1 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.053 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.314 nvme0n1 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: ]] 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.314 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.315 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.315 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.315 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.315 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:31.315 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.315 04:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.575 nvme0n1 00:27:31.575 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.575 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.575 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.575 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.575 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.575 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.575 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.575 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.575 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.575 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.575 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.575 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.575 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:31.576 
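
Before each attach, nvmet_auth_set_key (host/auth.sh@42-51 in the trace) pushes the same digest, DH group, and secret(s) to the target's host entry; the quoted 'hmac(sha384)' is a kernel crypto API string, which suggests a Linux kernel nvmet target configured over configfs. A sketch of where those echoes presumably land (the configfs paths and attribute names are assumptions based on the kernel nvmet layout, not shown in this log; the secrets are the test's own keyid-0 pair):

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"     # digest (auth.sh@48)
    echo ffdhe3072      > "$host/dhchap_dhgroup"  # DH group (auth.sh@49)
    echo 'DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu:' \
            > "$host/dhchap_key"                  # host secret (auth.sh@50)
    echo 'DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=:' \
            > "$host/dhchap_ctrl_key"             # controller secret, bidirectional rounds only (auth.sh@51)
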
04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.576 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.837 nvme0n1 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.837 
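
Note the keyid-4 round just completed: its ckey is empty, so the [[ -z '' ]] guard at auth.sh@51 skips the controller secret, the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) array at auth.sh@58 expands to zero arguments via the :+ operator, and the attach carries --dhchap-key key4 with no --dhchap-ctrlr-key, i.e. unidirectional authentication (the target verifies the host, but not vice versa). The DHHC-1 strings themselves are the portable secret container from the NVMe DH-HMAC-CHAP spec: a version tag, a two-digit hash hint (00 = unspecified, 01/02/03 = SHA-256/384/512), then base64 of the raw secret with a trailing CRC-32; that layout is stated here from the spec rather than from this log. A quick decomposition in bash:

    # Pull apart the keyid-0 host secret printed in the trace.
    key='DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu:'
    IFS=: read -r ver hash blob _ <<< "$key"
    echo "version=$ver hash-hint=$hash"            # DHHC-1, 00
    # The last 4 decoded bytes are a CRC-32 over the secret proper.
    len=$(printf '%s' "$blob" | base64 -d | wc -c)
    echo "secret: $((len - 4)) bytes (+4 CRC)"     # 32-byte secret here
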
04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: ]] 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.837 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:31.838 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.838 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:31.838 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:31.838 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:31.838 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:31.838 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.838 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.099 nvme0n1 00:27:32.099 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.099 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.099 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.099 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.099 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.099 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.360 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.360 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.360 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.360 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.360 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.360 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.360 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:32.361 04:38:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.361 04:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.622 nvme0n1 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:32.622 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.623 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.884 nvme0n1 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: ]] 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:32.884 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.885 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.145 nvme0n1 00:27:33.145 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.406 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.407 04:38:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.407 04:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.668 nvme0n1 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: ]] 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.668 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.240 nvme0n1 00:27:34.240 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.240 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.240 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.240 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.240 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.240 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.240 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.240 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.240 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.240 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.240 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.241 04:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.812 nvme0n1 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:34.812 04:38:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.812 04:38:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.812 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.383 nvme0n1 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: ]] 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:35.383 04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.383 
04:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.644 nvme0n1 00:27:35.644 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.644 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:35.644 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:35.644 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.644 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.644 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.905 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.166 nvme0n1 00:27:36.166 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.166 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.166 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.166 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.166 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:36.428 04:38:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: ]] 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.428 04:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:36.999 nvme0n1 00:27:36.999 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.999 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:36.999 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:36.999 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.999 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.259 04:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.829 nvme0n1 00:27:37.829 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.830 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.090 
04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.090 04:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.661 nvme0n1 00:27:38.661 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: ]] 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:38.922 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:38.923 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.923 04:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.494 nvme0n1 00:27:39.494 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.756 04:38:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:39.756 04:38:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.756 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.697 nvme0n1 00:27:40.697 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.697 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.697 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.697 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.697 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.697 04:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: ]] 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:40.697 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:40.698 nvme0n1 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.698 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.959 nvme0n1 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:40.959 
04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:40.959 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.220 nvme0n1 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: ]] 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.220 
04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.220 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.481 nvme0n1 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.481 04:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.481 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.481 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.481 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.481 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.481 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.481 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.481 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.481 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.481 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.481 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.481 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.481 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.481 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.481 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.743 nvme0n1 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: ]] 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.743 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.004 nvme0n1 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.004 
04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.004 04:38:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.004 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.265 nvme0n1 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:42.265 04:38:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.265 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.266 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.526 nvme0n1 00:27:42.527 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.527 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.527 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.527 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.527 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.527 04:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: ]] 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.527 04:38:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.527 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.787 nvme0n1 00:27:42.787 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.787 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.787 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:42.788 
04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:42.788 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
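For anyone replaying this trace by hand, each pass of the loop above reduces to four RPCs per digest/dhgroup/keyid combination. A minimal sketch, reconstructed from the rpc_cmd calls visible in the trace (the nvme0 controller name, the 10.0.0.1:4420 listener, and the host0/cnode0 NQNs are taken verbatim from the log; that rpc_cmd forwards to scripts/rpc.py is an assumption about the test harness):

    #!/usr/bin/env bash
    # One pass of the auth loop, per the trace: pin the initiator to a single
    # digest/dhgroup pair, attach with the keyid-th DH-CHAP key (plus the
    # matching controller key, when one was generated for that keyid),
    # confirm the controller shows up, then detach for the next combination.
    digest=sha512 dhgroup=ffdhe3072 keyid=3
    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    [[ "$(./scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
    ./scripts/rpc.py bdev_nvme_detach_controller nvme0

Note that the keyid=4 passes in the trace omit --dhchap-ctrlr-key, matching the empty ckey= ([[ -z '' ]]) seen above for that slot.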
00:27:43.049 nvme0n1 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: ]] 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.049 04:38:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.049 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.050 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.050 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.050 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.050 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.050 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.050 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.310 nvme0n1 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.310 04:38:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.310 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:43.311 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:43.311 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.571 04:38:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.571 04:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.832 nvme0n1 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.832 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.093 nvme0n1 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: ]] 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.093 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.354 nvme0n1 00:27:44.354 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.354 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.354 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.354 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.354 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.354 04:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:44.614 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.615 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.875 nvme0n1 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
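For orientation, the nvmet_auth_set_key calls traced above (host/auth.sh@42-51) program the kernel target's side of DH-HMAC-CHAP: they echo the digest, the DH group, the host key, and, when one exists, a controller key. A minimal sketch of the equivalent steps follows, assuming the standard Linux nvmet configfs attribute names; the trace shows the echoed values but not their destinations, so the paths here are an assumption:

# hypothetical reconstruction -- configfs paths and attribute names are assumed, not shown in the trace
hostnqn=nqn.2024-02.io.spdk:host0
host=/sys/kernel/config/nvmet/hosts/$hostnqn
echo 'hmac(sha512)' > "$host/dhchap_hash"                  # digest (auth.sh@48)
echo ffdhe4096 > "$host/dhchap_dhgroup"                    # DH group (auth.sh@49)
echo "$key" > "$host/dhchap_key"                           # DHHC-1:... host key (auth.sh@50)
[[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"   # only for bidirectional auth (auth.sh@51)

The keyid=4 pass above runs with an empty ckey, which is why the [[ -z '' ]] guard at auth.sh@51 skips the controller-key write.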
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: ]] 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:44.875 04:38:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.875 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:44.876 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:44.876 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:44.876 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.876 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.876 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:44.876 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.876 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:44.876 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:44.876 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:44.876 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:44.876 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:44.876 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.446 nvme0n1 00:27:45.446 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.446 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.446 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.446 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.446 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.446 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.446 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.446 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.446 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.446 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.446 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.446 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.446 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
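The get_main_ns_ip helper traced repeatedly at nvmf/common.sh@769-783 picks the address to attach to by mapping the transport to the name of an environment variable and then dereferencing it. A condensed sketch, reconstructed from the trace; the transport variable's name (TEST_TRANSPORT here) is an assumption, since the trace only shows the expanded literal tcp:

# sketch of get_main_ns_ip as reconstructed from the nvmf/common.sh trace
get_main_ns_ip() {
	local ip
	local -A ip_candidates
	ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
	ip_candidates["tcp"]=NVMF_INITIATOR_IP
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}
	[[ -z ${!ip} ]] && return 1    # indirect expansion: NVMF_INITIATOR_IP -> 10.0.0.1
	echo "${!ip}"
}

For this tcp run it resolves to 10.0.0.1, the -a argument of every bdev_nvme_attach_controller call in this log.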
key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:45.447 04:38:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:45.447 04:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.018 nvme0n1 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:46.018 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.019 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.588 nvme0n1 00:27:46.588 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.589 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.589 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.589 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.589 04:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: ]] 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:46.589 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.159 nvme0n1 00:27:47.159 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.159 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.159 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.159 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.159 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.159 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.159 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.159 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.159 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.160 04:39:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.160 04:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.731 nvme0n1 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
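Each connect_authenticate pass above (host/auth.sh@55-65) is the same five-step RPC sequence on the initiator: constrain the negotiable digest and DH group, attach with the keys under test, confirm the controller came up, and detach before the next combination. Restated as plain rpc.py calls; rpc_cmd in this trace is assumed to wrap SPDK's scripts/rpc.py, and key3/ckey3 are assumed to be key names registered with the initiator's keyring beforehand:

# one iteration of the auth loop, e.g. sha512 + ffdhe6144 + keyid 3
rpc=scripts/rpc.py
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
	-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
	--dhchap-key key3 --dhchap-ctrlr-key ckey3
name=$($rpc bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]]                      # authentication succeeded, controller exists
$rpc bdev_nvme_detach_controller nvme0    # clean up for the next dhgroup/keyid

Key 4 has no controller key in this key set, so its attach omits --dhchap-ctrlr-key, matching the ${ckeys[keyid]:+...} expansion at auth.sh@58.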
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yzg2ZDg3MmI3ZWEyZWJlZTcyYmU3NzYyY2ZjM2ZmZjKYgnDu: 00:27:47.731 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: ]] 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmQ1NzMxZTdhYTA0MjM2ZThiNTJiOGU5YzdhYjY5NWVkNGMzOTc5NzNlMGVkNzc3ODkxODlhOGE1MGNkN2JmZEcrhLE=: 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:47.732 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.303 nvme0n1 00:27:48.303 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.303 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.303 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.303 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.303 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.564 04:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.564 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:48.564 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.564 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:48.564 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:48.564 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:48.564 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.564 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.564 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:48.564 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.564 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:48.564 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:48.564 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:48.564 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.564 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:48.564 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.134 nvme0n1 00:27:49.134 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.134 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.134 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.134 04:39:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.134 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:49.395 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:49.396 04:39:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:49.396 04:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.966 nvme0n1 00:27:49.966 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:M2M1Njg4OTE1NWVkNTIzZjA2ZTM3YjM4OGZmNWNmM2IxNWE5YzBlMGVlYTM3M2UxiJ4+QQ==: 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: ]] 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTM1ZjBhNzIyMGJhMWVkY2Q0MTdiNmFmYmM3ODJkM2L9/5Is: 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.227 04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:50.227 
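The DHHC-1:NN:<base64>: strings used throughout are the NVMe-oF in-band authentication secret representation: the final field is the base64-encoded secret material, and the second field records the transformation applied to the secret (00 for an untransformed secret, higher values for hashed variants, which is consistent with the :03: keys in this log being the longest). Secrets of this shape are commonly produced with nvme-cli; a hypothetical example, with flag names assumed from nvme-cli and no part of this trace:

# hypothetical: generate a 48-byte DH-HMAC-CHAP secret for this host
nvme gen-dhchap-key --key-length=48 --nqn=nqn.2024-02.io.spdk:host0
# prints a key of the form DHHC-1:00:<base64>: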
04:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.169 nvme0n1 00:27:51.169 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.169 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.169 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.169 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NDM4YzEyNzVhMzg1ZjY0MDhkYTAxNmMzNTkxZmRiNjg0MjAwNjE1M2FjNGI2OWMyMzU0ZGQwMGM4NmU3YjdhOZekgfc=: 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.170 04:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.741 nvme0n1 00:27:51.741 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.741 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.741 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.741 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.741 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.741 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.741 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.741 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.741 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.741 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.741 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:51.742 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.003 request: 00:27:52.003 { 00:27:52.003 "name": "nvme0", 00:27:52.003 "trtype": "tcp", 00:27:52.003 "traddr": "10.0.0.1", 00:27:52.003 "adrfam": "ipv4", 00:27:52.003 "trsvcid": "4420", 00:27:52.003 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:52.003 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:52.003 "prchk_reftag": false, 00:27:52.003 "prchk_guard": false, 00:27:52.003 "hdgst": false, 00:27:52.003 "ddgst": false, 00:27:52.003 "allow_unrecognized_csi": false, 00:27:52.003 "method": "bdev_nvme_attach_controller", 00:27:52.003 "req_id": 1 00:27:52.003 } 00:27:52.003 Got JSON-RPC error response 00:27:52.003 response: 00:27:52.003 { 00:27:52.003 "code": -5, 00:27:52.003 "message": "Input/output error" 00:27:52.003 } 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
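The request/response pair above is the suite's first negative check: with DH-HMAC-CHAP enforced on the kernel target, an attach that supplies no --dhchap-key is rejected with JSON-RPC code -5 (Input/output error). A minimal sketch of the same call issued by hand through scripts/rpc.py instead of the rpc_cmd test wrapper, with every flag taken from the request JSON in the trace (the running target and pre-loaded keys are assumed):

    ./scripts/rpc.py bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
    # Expected, per the log: failure with code -5 "Input/output error",
    # because the target requires authentication and no DH-CHAP key was given.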
00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:52.003 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.004 request: 00:27:52.004 { 00:27:52.004 "name": "nvme0", 00:27:52.004 "trtype": "tcp", 00:27:52.004 "traddr": "10.0.0.1", 00:27:52.004 "adrfam": "ipv4", 00:27:52.004 "trsvcid": "4420", 00:27:52.004 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:52.004 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:52.004 "prchk_reftag": false, 00:27:52.004 "prchk_guard": false, 00:27:52.004 "hdgst": false, 00:27:52.004 "ddgst": false, 00:27:52.004 "dhchap_key": "key2", 00:27:52.004 "allow_unrecognized_csi": false, 00:27:52.004 "method": "bdev_nvme_attach_controller", 00:27:52.004 "req_id": 1 00:27:52.004 } 00:27:52.004 Got JSON-RPC error response 00:27:52.004 response: 00:27:52.004 { 00:27:52.004 "code": -5, 00:27:52.004 "message": "Input/output error" 00:27:52.004 } 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
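Each expected failure above runs through the NOT wrapper, whose trace (local es=0 ... es=1 ... (( !es == 0 ))) shows it inverting the wrapped command's exit status. A minimal sketch of that pattern, omitting autotest_common.sh's extra handling (the real helper also special-cases statuses above 128, visible as (( es > 128 )) in the trace):

    NOT() {
        local es=0
        "$@" || es=$?    # run the wrapped command, capture its exit status
        (( es != 0 ))    # succeed only when the command failed
    }
    # Usage, mirroring the log: attaching with the wrong key must fail.
    NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2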
00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.004 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.264 request: 00:27:52.264 { 00:27:52.264 "name": "nvme0", 00:27:52.264 "trtype": "tcp", 00:27:52.264 "traddr": "10.0.0.1", 00:27:52.264 "adrfam": "ipv4", 00:27:52.264 "trsvcid": "4420", 00:27:52.265 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:52.265 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:52.265 "prchk_reftag": false, 00:27:52.265 "prchk_guard": false, 00:27:52.265 "hdgst": false, 00:27:52.265 "ddgst": false, 00:27:52.265 "dhchap_key": "key1", 00:27:52.265 "dhchap_ctrlr_key": "ckey2", 00:27:52.265 "allow_unrecognized_csi": false, 00:27:52.265 "method": "bdev_nvme_attach_controller", 00:27:52.265 "req_id": 1 00:27:52.265 } 00:27:52.265 Got JSON-RPC error response 00:27:52.265 response: 00:27:52.265 { 00:27:52.265 "code": -5, 00:27:52.265 "message": "Input/output 
error" 00:27:52.265 } 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.265 nvme0n1 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.265 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.525 04:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.525 request: 00:27:52.525 { 00:27:52.525 "name": "nvme0", 00:27:52.525 "dhchap_key": "key1", 00:27:52.525 "dhchap_ctrlr_key": "ckey2", 00:27:52.525 "method": "bdev_nvme_set_keys", 00:27:52.525 "req_id": 1 00:27:52.525 } 00:27:52.525 Got JSON-RPC error response 00:27:52.525 response: 00:27:52.525 { 00:27:52.525 "code": -13, 00:27:52.525 "message": "Permission denied" 00:27:52.525 } 00:27:52.525 04:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:52.525 04:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:52.525 04:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:52.525 04:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:52.525 04:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:27:52.525 04:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.525 04:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:52.525 04:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:52.525 04:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.525 04:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:52.525 04:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:52.525 04:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:53.466 04:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.466 04:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:53.466 04:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:53.466 04:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.466 04:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:53.726 04:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:27:53.726 04:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Mzk0OGFjOWVhNmJjNTBiZDgyYWJkNTBkODM3MjBiZjJhNDhiOGJmNGNiYzFjYjZhtPo6FA==: 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: ]] 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:OGI4MGU1MTQxMzE4MTBjZjFlNGVjMmNhZDdlZmY4N2I1ODU4NWVkMzM2NTI4MmYw42rO0w==: 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.667 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.927 nvme0n1 00:27:54.927 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.927 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ODlhNDIzZGUzYzc1MGJjOTA2NDhhNzcyODg5Y2RiZjIGuj81: 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: ]] 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MjlmNWE4NWJkNzgzNzYxMDBjOTQ5NmRiM2IwZmNlYzX59l2f: 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.928 request: 00:27:54.928 { 00:27:54.928 "name": "nvme0", 00:27:54.928 "dhchap_key": "key2", 00:27:54.928 "dhchap_ctrlr_key": "ckey1", 00:27:54.928 "method": "bdev_nvme_set_keys", 00:27:54.928 "req_id": 1 00:27:54.928 } 00:27:54.928 Got JSON-RPC error response 00:27:54.928 response: 00:27:54.928 { 00:27:54.928 "code": -13, 00:27:54.928 "message": "Permission denied" 00:27:54.928 } 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:27:54.928 04:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:27:55.869 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.869 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:27:55.869 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:55.869 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.869 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:55.869 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:27:55.869 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:27:55.869 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:27:55.869 04:39:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:55.869 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:55.869 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:27:55.869 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:55.869 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:27:55.869 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:55.869 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:55.869 rmmod nvme_tcp 00:27:56.128 rmmod nvme_fabrics 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 3141775 ']' 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 3141775 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 3141775 ']' 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 3141775 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3141775 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3141775' 00:27:56.128 killing process with pid 3141775 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 3141775 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 3141775 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:27:56.128 04:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:58.731 04:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:58.731 04:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:58.731 04:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:58.731 04:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:58.731 04:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:58.731 04:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:27:58.731 04:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:58.731 04:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:58.731 04:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:58.731 04:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:58.731 04:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:58.731 04:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:58.731 04:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:02.071 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:02.071 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:02.333 04:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.pch /tmp/spdk.key-null.qNQ /tmp/spdk.key-sha256.E8t /tmp/spdk.key-sha384.XWu /tmp/spdk.key-sha512.6qB /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:02.333 04:39:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:05.638 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:05.638 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:05.638 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 
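The cleanup trace above tears down the kernel nvmet target in strict reverse order of its creation: configfs symlinks first, then namespace, port, and subsystem directories, and only then the module unload. The same sequence as plain commands, with every path taken from the trace (the destination of the bare "echo 0" is inferred to be the namespace's enable attribute, which must be cleared before the rmdir):

    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"    # unlink allowed host
    rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 0 > "$subsys/namespaces/1/enable"                  # inferred target of "echo 0"
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
    rmdir "$subsys/namespaces/1"
    rmdir /sys/kernel/config/nvmet/ports/1
    rmdir "$subsys"
    modprobe -r nvmet_tcp nvmet    # safe once configfs is empty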
00:28:05.638 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:05.638 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:05.638 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:05.638 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:05.638 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:05.638 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:28:05.638 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:28:05.638 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:28:05.638 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:28:05.638 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:28:05.638 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:28:05.638 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:28:05.638 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:28:05.638 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:28:06.215 00:28:06.215 real 1m3.067s 00:28:06.215 user 0m57.211s 00:28:06.215 sys 0m15.484s 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.215 ************************************ 00:28:06.215 END TEST nvmf_auth_host 00:28:06.215 ************************************ 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.215 ************************************ 00:28:06.215 START TEST nvmf_digest 00:28:06.215 ************************************ 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:06.215 * Looking for test storage... 
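The summary above closes nvmf_auth_host after about a minute of wall time, and the harness immediately launches the next suite with run_test nvmf_digest. A sketch of invoking that suite standalone, assuming a built SPDK tree and the environment described by autorun-spdk.conf; the workspace path is specific to this CI node:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./test/nvmf/host/digest.sh --transport=tcp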
00:28:06.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:06.215 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:06.476 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:06.476 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:06.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.476 --rc genhtml_branch_coverage=1 00:28:06.476 --rc genhtml_function_coverage=1 00:28:06.476 --rc genhtml_legend=1 00:28:06.476 --rc geninfo_all_blocks=1 00:28:06.476 --rc geninfo_unexecuted_blocks=1 00:28:06.476 00:28:06.476 ' 00:28:06.476 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:06.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.476 --rc genhtml_branch_coverage=1 00:28:06.476 --rc genhtml_function_coverage=1 00:28:06.476 --rc genhtml_legend=1 00:28:06.476 --rc geninfo_all_blocks=1 00:28:06.476 --rc geninfo_unexecuted_blocks=1 00:28:06.476 00:28:06.476 ' 00:28:06.476 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:06.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.476 --rc genhtml_branch_coverage=1 00:28:06.476 --rc genhtml_function_coverage=1 00:28:06.476 --rc genhtml_legend=1 00:28:06.476 --rc geninfo_all_blocks=1 00:28:06.476 --rc geninfo_unexecuted_blocks=1 00:28:06.477 00:28:06.477 ' 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:06.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:06.477 --rc genhtml_branch_coverage=1 00:28:06.477 --rc genhtml_function_coverage=1 00:28:06.477 --rc genhtml_legend=1 00:28:06.477 --rc geninfo_all_blocks=1 00:28:06.477 --rc geninfo_unexecuted_blocks=1 00:28:06.477 00:28:06.477 ' 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.477 
04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:06.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:06.477 04:39:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:06.477 04:39:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:14.615 
04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:14.615 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:14.615 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:14.615 Found net devices under 0000:4b:00.0: cvl_0_0 
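[Annotation] The trace above is nvmf/common.sh classifying the host's NICs by PCI vendor:device ID (E810 is 0x8086:0x1592 or 0x8086:0x159b, X722 is 0x8086:0x37d2, the 0x15b3 entries are Mellanox/mlx5) and then resolving each matched port to its kernel netdev through sysfs. A minimal standalone sketch of that lookup, using the ID and name this run reported (the sysfs path is standard; the PCI address is specific to this rig):

    pci=0000:4b:00.0                      # first E810 port found above (0x8086 - 0x159b)
    ls "/sys/bus/pci/devices/$pci/net"    # prints cvl_0_0, the netdev the test will use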
00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:14.615 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:14.615 04:39:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:14.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:14.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:28:14.616 00:28:14.616 --- 10.0.0.2 ping statistics --- 00:28:14.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.616 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:14.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:14.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.293 ms 00:28:14.616 00:28:14.616 --- 10.0.0.1 ping statistics --- 00:28:14.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:14.616 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:14.616 ************************************ 00:28:14.616 START TEST nvmf_digest_clean 00:28:14.616 ************************************ 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=3159967 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 3159967 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3159967 ']' 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:14.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:14.616 04:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:14.616 [2024-11-05 04:39:27.411630] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:28:14.616 [2024-11-05 04:39:27.411693] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:14.616 [2024-11-05 04:39:27.494636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:14.616 [2024-11-05 04:39:27.534630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:14.616 [2024-11-05 04:39:27.534668] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:14.616 [2024-11-05 04:39:27.534676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:14.616 [2024-11-05 04:39:27.534682] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:14.616 [2024-11-05 04:39:27.534688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
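[Annotation] Recapping the setup traced before this target launch: the run builds a two-port loopback topology on a single host. The first E810 port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2 (the target side), the second port (cvl_0_1) stays in the root namespace as 10.0.0.1 (the initiator side), TCP port 4420 is opened in the firewall, and nvmf_tgt is started inside the namespace, held idle by --wait-for-rpc until it is configured. Condensed from the commands above, with the interface names and addresses this run used (the iptables comment tag is dropped here for brevity):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc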
00:28:14.616 [2024-11-05 04:39:27.535314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:14.616 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:14.616 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:14.616 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:14.616 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:14.616 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:14.616 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:14.616 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:14.616 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:14.616 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:14.616 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.616 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:14.876 null0 00:28:14.876 [2024-11-05 04:39:28.319593] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.876 [2024-11-05 04:39:28.343802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.876 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.876 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:14.876 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:14.876 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:14.876 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:14.876 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:14.876 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:14.876 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:14.876 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3160023 00:28:14.876 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3160023 /var/tmp/bperf.sock 00:28:14.876 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3160023 ']' 00:28:14.877 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:14.877 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:14.877 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:28:14.877 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:14.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:14.877 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:14.877 04:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:14.877 [2024-11-05 04:39:28.399337] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:28:14.877 [2024-11-05 04:39:28.399383] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160023 ] 00:28:14.877 [2024-11-05 04:39:28.488139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.137 [2024-11-05 04:39:28.523993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.707 04:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:15.707 04:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:15.707 04:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:15.707 04:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:15.707 04:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:15.968 04:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:15.968 04:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:16.227 nvme0n1 00:28:16.227 04:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:16.227 04:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:16.487 Running I/O for 2 seconds... 
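[Annotation] Each benchmark pass in this test follows the same remote-control pattern: bdevperf is launched idle (-z, --wait-for-rpc) on a private RPC socket, the accel framework is initialized, an NVMe-oF TCP controller is attached with data digest enabled (--ddgst), and the timed run is then kicked off through the socket. Condensed from this pass's trace; bdevperf, rpc.py, and bdevperf.py are the binaries under the SPDK tree checked out above:

    bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    bdevperf.py -s /var/tmp/bperf.sock perform_tests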
00:28:18.367 19391.00 IOPS, 75.75 MiB/s [2024-11-05T03:39:32.007Z] 19722.00 IOPS, 77.04 MiB/s 00:28:18.367 Latency(us) 00:28:18.367 [2024-11-05T03:39:32.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.367 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:18.367 nvme0n1 : 2.00 19744.01 77.13 0.00 0.00 6476.72 1966.08 15400.96 00:28:18.367 [2024-11-05T03:39:32.007Z] =================================================================================================================== 00:28:18.367 [2024-11-05T03:39:32.007Z] Total : 19744.01 77.13 0.00 0.00 6476.72 1966.08 15400.96 00:28:18.367 { 00:28:18.367 "results": [ 00:28:18.367 { 00:28:18.367 "job": "nvme0n1", 00:28:18.367 "core_mask": "0x2", 00:28:18.367 "workload": "randread", 00:28:18.367 "status": "finished", 00:28:18.367 "queue_depth": 128, 00:28:18.367 "io_size": 4096, 00:28:18.367 "runtime": 2.004253, 00:28:18.367 "iops": 19744.01435347733, 00:28:18.367 "mibps": 77.12505606827082, 00:28:18.367 "io_failed": 0, 00:28:18.367 "io_timeout": 0, 00:28:18.367 "avg_latency_us": 6476.724957040332, 00:28:18.367 "min_latency_us": 1966.08, 00:28:18.367 "max_latency_us": 15400.96 00:28:18.367 } 00:28:18.367 ], 00:28:18.367 "core_count": 1 00:28:18.367 } 00:28:18.367 04:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:18.367 04:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:18.367 04:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:18.367 04:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:18.367 | select(.opcode=="crc32c") 00:28:18.367 | "\(.module_name) \(.executed)"' 00:28:18.367 04:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:18.627 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:18.627 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:18.627 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:18.627 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:18.627 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3160023 00:28:18.627 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3160023 ']' 00:28:18.627 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3160023 00:28:18.628 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:18.628 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:18.628 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3160023 00:28:18.628 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:18.628 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 
00:28:18.628 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3160023' 00:28:18.628 killing process with pid 3160023 00:28:18.628 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3160023 00:28:18.628 Received shutdown signal, test time was about 2.000000 seconds 00:28:18.628 00:28:18.628 Latency(us) 00:28:18.628 [2024-11-05T03:39:32.268Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:18.628 [2024-11-05T03:39:32.268Z] =================================================================================================================== 00:28:18.628 [2024-11-05T03:39:32.268Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:18.628 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3160023 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3160844 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3160844 /var/tmp/bperf.sock 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3160844 ']' 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:18.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:18.888 04:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:18.888 [2024-11-05 04:39:32.328836] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
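[Annotation] The pass/fail criterion for each run above is not the IOPS figure but the accel statistics: with DSA scanning disabled (scan_dsa=false), the test queries the crc32c counters over the same bperf socket and requires that the executing module be 'software' and the executed count be non-zero, i.e. that the digests were really computed on the host CPU. From the trace, the check reduces to:

    rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'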
00:28:18.888 [2024-11-05 04:39:32.328896] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3160844 ] 00:28:18.888 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:18.888 Zero copy mechanism will not be used. 00:28:18.888 [2024-11-05 04:39:32.411520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.888 [2024-11-05 04:39:32.440852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.828 04:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:19.828 04:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:19.829 04:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:19.829 04:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:19.829 04:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:19.829 04:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:19.829 04:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:20.088 nvme0n1 00:28:20.088 04:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:20.088 04:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:20.088 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:20.088 Zero copy mechanism will not be used. 00:28:20.088 Running I/O for 2 seconds... 
00:28:22.422 3079.00 IOPS, 384.88 MiB/s [2024-11-05T03:39:36.062Z] 3503.50 IOPS, 437.94 MiB/s 00:28:22.422 Latency(us) 00:28:22.422 [2024-11-05T03:39:36.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.422 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:22.422 nvme0n1 : 2.00 3506.41 438.30 0.00 0.00 4561.13 1058.13 10813.44 00:28:22.422 [2024-11-05T03:39:36.062Z] =================================================================================================================== 00:28:22.422 [2024-11-05T03:39:36.062Z] Total : 3506.41 438.30 0.00 0.00 4561.13 1058.13 10813.44 00:28:22.422 { 00:28:22.422 "results": [ 00:28:22.422 { 00:28:22.422 "job": "nvme0n1", 00:28:22.422 "core_mask": "0x2", 00:28:22.422 "workload": "randread", 00:28:22.422 "status": "finished", 00:28:22.422 "queue_depth": 16, 00:28:22.422 "io_size": 131072, 00:28:22.422 "runtime": 2.002905, 00:28:22.422 "iops": 3506.406943913965, 00:28:22.422 "mibps": 438.3008679892456, 00:28:22.422 "io_failed": 0, 00:28:22.422 "io_timeout": 0, 00:28:22.422 "avg_latency_us": 4561.1299710475105, 00:28:22.422 "min_latency_us": 1058.1333333333334, 00:28:22.422 "max_latency_us": 10813.44 00:28:22.422 } 00:28:22.422 ], 00:28:22.422 "core_count": 1 00:28:22.422 } 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:22.422 | select(.opcode=="crc32c") 00:28:22.422 | "\(.module_name) \(.executed)"' 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3160844 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3160844 ']' 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3160844 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3160844 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3160844' 00:28:22.422 killing process with pid 3160844 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3160844 00:28:22.422 Received shutdown signal, test time was about 2.000000 seconds 00:28:22.422 00:28:22.422 Latency(us) 00:28:22.422 [2024-11-05T03:39:36.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.422 [2024-11-05T03:39:36.062Z] =================================================================================================================== 00:28:22.422 [2024-11-05T03:39:36.062Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:22.422 04:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3160844 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3161625 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3161625 /var/tmp/bperf.sock 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3161625 ']' 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:22.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:22.422 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:22.682 [2024-11-05 04:39:36.096533] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:28:22.683 [2024-11-05 04:39:36.096592] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3161625 ] 00:28:22.683 [2024-11-05 04:39:36.179571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.683 [2024-11-05 04:39:36.209008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.253 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:23.253 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:23.253 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:23.253 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:23.253 04:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:23.513 04:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:23.513 04:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:24.084 nvme0n1 00:28:24.084 04:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:24.084 04:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:24.084 Running I/O for 2 seconds... 
00:28:25.963 21534.00 IOPS, 84.12 MiB/s [2024-11-05T03:39:39.603Z] 21559.00 IOPS, 84.21 MiB/s 00:28:25.963 Latency(us) 00:28:25.963 [2024-11-05T03:39:39.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:25.963 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:25.963 nvme0n1 : 2.00 21593.63 84.35 0.00 0.00 5921.98 2266.45 10431.15 00:28:25.963 [2024-11-05T03:39:39.603Z] =================================================================================================================== 00:28:25.963 [2024-11-05T03:39:39.603Z] Total : 21593.63 84.35 0.00 0.00 5921.98 2266.45 10431.15 00:28:25.963 { 00:28:25.963 "results": [ 00:28:25.963 { 00:28:25.963 "job": "nvme0n1", 00:28:25.963 "core_mask": "0x2", 00:28:25.963 "workload": "randwrite", 00:28:25.963 "status": "finished", 00:28:25.963 "queue_depth": 128, 00:28:25.963 "io_size": 4096, 00:28:25.963 "runtime": 2.00272, 00:28:25.963 "iops": 21593.632659582967, 00:28:25.963 "mibps": 84.35012757649596, 00:28:25.963 "io_failed": 0, 00:28:25.963 "io_timeout": 0, 00:28:25.963 "avg_latency_us": 5921.98336956019, 00:28:25.963 "min_latency_us": 2266.4533333333334, 00:28:25.963 "max_latency_us": 10431.146666666667 00:28:25.963 } 00:28:25.963 ], 00:28:25.963 "core_count": 1 00:28:25.963 } 00:28:26.223 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:26.223 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:26.223 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:26.223 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:26.223 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:26.223 | select(.opcode=="crc32c") 00:28:26.223 | "\(.module_name) \(.executed)"' 00:28:26.223 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:26.223 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:26.223 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:26.223 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:26.223 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3161625 00:28:26.223 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3161625 ']' 00:28:26.223 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3161625 00:28:26.223 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:26.223 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:26.223 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3161625 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3161625' 00:28:26.483 killing process with pid 3161625 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3161625 00:28:26.483 Received shutdown signal, test time was about 2.000000 seconds 00:28:26.483 00:28:26.483 Latency(us) 00:28:26.483 [2024-11-05T03:39:40.123Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:26.483 [2024-11-05T03:39:40.123Z] =================================================================================================================== 00:28:26.483 [2024-11-05T03:39:40.123Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3161625 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3162393 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3162393 /var/tmp/bperf.sock 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 3162393 ']' 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:26.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:26.483 04:39:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:26.483 [2024-11-05 04:39:40.020085] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:28:26.483 [2024-11-05 04:39:40.020146] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3162393 ] 00:28:26.483 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:26.483 Zero copy mechanism will not be used. 00:28:26.483 [2024-11-05 04:39:40.107145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:26.743 [2024-11-05 04:39:40.137371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.313 04:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:27.313 04:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:27.313 04:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:27.313 04:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:27.314 04:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:27.574 04:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.574 04:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:27.835 nvme0n1 00:28:27.835 04:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:27.835 04:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:28.095 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:28.095 Zero copy mechanism will not be used. 00:28:28.095 Running I/O for 2 seconds... 
00:28:29.975 3413.00 IOPS, 426.62 MiB/s [2024-11-05T03:39:43.615Z] 4225.00 IOPS, 528.12 MiB/s 00:28:29.975 Latency(us) 00:28:29.975 [2024-11-05T03:39:43.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:29.975 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:29.975 nvme0n1 : 2.00 4225.50 528.19 0.00 0.00 3781.38 1426.77 15182.51 00:28:29.975 [2024-11-05T03:39:43.615Z] =================================================================================================================== 00:28:29.975 [2024-11-05T03:39:43.616Z] Total : 4225.50 528.19 0.00 0.00 3781.38 1426.77 15182.51 00:28:29.976 { 00:28:29.976 "results": [ 00:28:29.976 { 00:28:29.976 "job": "nvme0n1", 00:28:29.976 "core_mask": "0x2", 00:28:29.976 "workload": "randwrite", 00:28:29.976 "status": "finished", 00:28:29.976 "queue_depth": 16, 00:28:29.976 "io_size": 131072, 00:28:29.976 "runtime": 2.003548, 00:28:29.976 "iops": 4225.503955982088, 00:28:29.976 "mibps": 528.187994497761, 00:28:29.976 "io_failed": 0, 00:28:29.976 "io_timeout": 0, 00:28:29.976 "avg_latency_us": 3781.3830758327426, 00:28:29.976 "min_latency_us": 1426.7733333333333, 00:28:29.976 "max_latency_us": 15182.506666666666 00:28:29.976 } 00:28:29.976 ], 00:28:29.976 "core_count": 1 00:28:29.976 } 00:28:29.976 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:29.976 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:29.976 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:29.976 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:29.976 | select(.opcode=="crc32c") 00:28:29.976 | "\(.module_name) \(.executed)"' 00:28:29.976 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:30.236 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:30.236 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:30.236 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:30.236 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:30.236 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3162393 00:28:30.236 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3162393 ']' 00:28:30.236 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3162393 00:28:30.236 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:30.236 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:30.236 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3162393 00:28:30.236 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:30.236 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # 
'[' reactor_1 = sudo ']' 00:28:30.236 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3162393' 00:28:30.236 killing process with pid 3162393 00:28:30.236 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3162393 00:28:30.236 Received shutdown signal, test time was about 2.000000 seconds 00:28:30.236 00:28:30.236 Latency(us) 00:28:30.236 [2024-11-05T03:39:43.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:30.236 [2024-11-05T03:39:43.876Z] =================================================================================================================== 00:28:30.236 [2024-11-05T03:39:43.876Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:30.236 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3162393 00:28:30.496 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3159967 00:28:30.496 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 3159967 ']' 00:28:30.496 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 3159967 00:28:30.496 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:30.496 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:30.496 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3159967 00:28:30.496 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:30.496 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:30.496 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3159967' 00:28:30.496 killing process with pid 3159967 00:28:30.496 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 3159967 00:28:30.496 04:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 3159967 00:28:30.496 00:28:30.496 real 0m16.747s 00:28:30.496 user 0m33.293s 00:28:30.496 sys 0m3.476s 00:28:30.496 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:30.496 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:30.496 ************************************ 00:28:30.496 END TEST nvmf_digest_clean 00:28:30.496 ************************************ 00:28:30.496 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:30.496 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:30.496 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:30.496 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:30.757 ************************************ 00:28:30.757 START TEST nvmf_digest_error 00:28:30.757 ************************************ 00:28:30.757 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:28:30.757 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:30.757 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:30.757 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:30.757 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.757 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=3163105 00:28:30.757 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 3163105 00:28:30.757 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:30.757 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3163105 ']' 00:28:30.757 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.757 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:30.757 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.757 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:30.757 04:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:30.757 [2024-11-05 04:39:44.231841] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:28:30.757 [2024-11-05 04:39:44.231901] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:30.757 [2024-11-05 04:39:44.313144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.757 [2024-11-05 04:39:44.353419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:30.757 [2024-11-05 04:39:44.353459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:30.757 [2024-11-05 04:39:44.353467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:30.757 [2024-11-05 04:39:44.353474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:30.757 [2024-11-05 04:39:44.353479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
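[Annotation] The nvmf_digest_error test starting here is the same harness with one extra configuration step: before any I/O, the target's crc32c operation is reassigned from the software module to SPDK's 'error' accel module (visible in the next trace entry), presumably so digest failures can be injected and the error paths exercised instead of the clean ones:

    rpc_cmd accel_assign_opc -o crc32c -m error   # rpc_cmd is autotest's wrapper around scripts/rpc.py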
00:28:30.757 [2024-11-05 04:39:44.354087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:31.698 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:31.698 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:28:31.698 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:31.698 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:31.698 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:31.698 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:31.698 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:28:31.698 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:31.698 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:31.698 [2024-11-05 04:39:45.068142] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:28:31.698 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:31.698 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:28:31.698 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:28:31.698 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:31.698 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:31.698 null0
00:28:31.699 [2024-11-05 04:39:45.150376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:31.699 [2024-11-05 04:39:45.174585] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:31.699 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:31.699 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:28:31.699 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:31.699 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:31.699 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:31.699 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:31.699 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3163451
00:28:31.699 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3163451 /var/tmp/bperf.sock
00:28:31.699 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3163451 ']'
00:28:31.699 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
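Two things happened on the target side above: accel_assign_opc routed all crc32c work to the accel "error" module (this has to happen before framework init, which is why the target was started with --wait-for-rpc), and common_target_config built a small TCP target around a null bdev. In rpc.py terms that is roughly the following; this is a sketch, and the null bdev sizes and the exact subsystem RPC sequence are assumptions inferred from the null0 and nqn.2016-06.io.spdk:cnode1 names that appear in this log:

  rpc='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
  # Route crc32c through the error-injection accel module, then finish startup.
  $rpc accel_assign_opc -o crc32c -m error
  $rpc framework_start_init
  # Target config: null bdev, TCP transport, one subsystem with one ns and one listener.
  $rpc bdev_null_create null0 1000 512        # name, size (MiB), block size -- sizes assumed
  $rpc nvmf_create_transport -t tcp
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then launched with -z (idle until tests are driven over RPC) on its own socket, /var/tmp/bperf.sock, so the harness can attach a controller and start I/O remotely, as the next records show.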
00:28:31.699 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:31.699 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:31.699 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:31.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:31.699 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:31.699 04:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:31.699 [2024-11-05 04:39:45.232713] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization...
00:28:31.699 [2024-11-05 04:39:45.232766] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3163451 ]
00:28:31.699 [2024-11-05 04:39:45.314944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:31.959 [2024-11-05 04:39:45.344742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:32.529 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:32.529 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:28:32.529 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:32.529 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:32.790 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:32.790 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:32.790 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:32.790 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:32.790 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:32.790 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:33.050 nvme0n1
00:28:33.050 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:33.050 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:33.050 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
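Collected in one place, the initiator-side setup that just ran looks like this (the commands are taken verbatim from the records above, only shortened through shell variables): injection is disabled while the controller connects so the connect-time digests stay valid, the controller is attached with --ddgst so every data PDU carries a crc32c data digest, and only then is the error module armed to corrupt crc32c results at the interval given by -i:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  b=/var/tmp/bperf.sock
  # Infinite bdev-level retries plus per-opcode NVMe error counters.
  $rpc -s $b bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Keep crc32c honest while the controller connects.
  $rpc -s $b accel_error_inject_error -o crc32c -t disable
  # Data digest on (--ddgst): every read payload is checksummed on receive.
  $rpc -s $b bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm the fault: corrupt crc32c results at interval 256.
  $rpc -s $b accel_error_inject_error -o crc32c -t corrupt -i 256
  # Drive the 2-second randread workload that produces the digest errors below.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $b perform_tests

Each corrupted digest then surfaces in the run below as a "data digest error" from nvme_tcp.c, and because of the -1 retry count the bdev layer retries the read, which is why every affected completion is logged as a COMMAND TRANSIENT TRANSPORT ERROR rather than failing the test.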
00:28:33.050 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.050 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:33.050 04:39:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:33.311 Running I/O for 2 seconds... 00:28:33.311 [2024-11-05 04:39:46.732126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.311 [2024-11-05 04:39:46.732158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.311 [2024-11-05 04:39:46.732168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.311 [2024-11-05 04:39:46.746805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.311 [2024-11-05 04:39:46.746825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.311 [2024-11-05 04:39:46.746833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.311 [2024-11-05 04:39:46.758696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.311 [2024-11-05 04:39:46.758714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.311 [2024-11-05 04:39:46.758721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.311 [2024-11-05 04:39:46.772798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.311 [2024-11-05 04:39:46.772816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.311 [2024-11-05 04:39:46.772823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.311 [2024-11-05 04:39:46.782732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.311 [2024-11-05 04:39:46.782756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.311 [2024-11-05 04:39:46.782763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.311 [2024-11-05 04:39:46.796602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.311 [2024-11-05 04:39:46.796621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.311 [2024-11-05 04:39:46.796627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.311 [2024-11-05 04:39:46.810294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.311 [2024-11-05 04:39:46.810311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.311 [2024-11-05 04:39:46.810318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.311 [2024-11-05 04:39:46.823064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.311 [2024-11-05 04:39:46.823082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.311 [2024-11-05 04:39:46.823088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.311 [2024-11-05 04:39:46.833528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.311 [2024-11-05 04:39:46.833546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:25117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.311 [2024-11-05 04:39:46.833552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.311 [2024-11-05 04:39:46.846829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.311 [2024-11-05 04:39:46.846846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.311 [2024-11-05 04:39:46.846853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.311 [2024-11-05 04:39:46.859064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.311 [2024-11-05 04:39:46.859081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.311 [2024-11-05 04:39:46.859087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.311 [2024-11-05 04:39:46.871974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.311 [2024-11-05 04:39:46.871991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:9885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.311 [2024-11-05 04:39:46.871998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.311 [2024-11-05 04:39:46.885856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.311 [2024-11-05 04:39:46.885872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:16168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.311 [2024-11-05 04:39:46.885879] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.311 [2024-11-05 04:39:46.897578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.311 [2024-11-05 04:39:46.897595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.311 [2024-11-05 04:39:46.897605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.311 [2024-11-05 04:39:46.909134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.312 [2024-11-05 04:39:46.909151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.312 [2024-11-05 04:39:46.909157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.312 [2024-11-05 04:39:46.921642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.312 [2024-11-05 04:39:46.921658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.312 [2024-11-05 04:39:46.921665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.312 [2024-11-05 04:39:46.934622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.312 [2024-11-05 04:39:46.934639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.312 [2024-11-05 04:39:46.934645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.312 [2024-11-05 04:39:46.947708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.312 [2024-11-05 04:39:46.947726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.312 [2024-11-05 04:39:46.947732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:46.960755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:46.960773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:46.960779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:46.973131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:46.973148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:1196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 
04:39:46.973155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:46.986215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:46.986231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:82 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:46.986238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:46.997806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:46.997823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:12721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:46.997829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.008427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.008448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:9258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.008454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.022867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.022884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.022891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.035190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.035208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.035214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.048089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.048106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.048113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.060106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.060122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5452 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.060129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.072842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.072859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.072865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.085664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.085680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.085686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.096778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.096795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.096802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.111183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.111200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:4091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.111206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.123024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.123041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.123047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.136265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.136282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.136289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.146878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.146895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:12028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.146901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.160015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.160033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:11927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.160039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.172719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.172736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.172742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.186039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.186057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.186063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.199278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.199296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.199302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.573 [2024-11-05 04:39:47.210134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.573 [2024-11-05 04:39:47.210151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17967 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.573 [2024-11-05 04:39:47.210157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.223919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.223936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.223945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.236362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.236379] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.236385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.250288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.250307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.250313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.263649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.263667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:6274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.263673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.276828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.276845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.276851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.288843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.288860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.288866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.301484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.301501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.301507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.313784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.313800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.313807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.327305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.327322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:5578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.327328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.340023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.340043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.340049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.351267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.351284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.351290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.363361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.363378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.363385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.377239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.377257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.377263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.390457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.390474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.390480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.403452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.403469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.403475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.416388] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.416405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.416411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.428169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.428186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.428192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.439865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.439882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:23151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.439891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.452599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.452616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.452622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:33.834 [2024-11-05 04:39:47.465820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:33.834 [2024-11-05 04:39:47.465836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:33.834 [2024-11-05 04:39:47.465843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.095 [2024-11-05 04:39:47.477278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.095 [2024-11-05 04:39:47.477296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.095 [2024-11-05 04:39:47.477303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.095 [2024-11-05 04:39:47.490428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.095 [2024-11-05 04:39:47.490445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.095 [2024-11-05 04:39:47.490451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:34.095 [2024-11-05 04:39:47.502016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.095 [2024-11-05 04:39:47.502033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23484 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.095 [2024-11-05 04:39:47.502039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.095 [2024-11-05 04:39:47.514778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.095 [2024-11-05 04:39:47.514795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.095 [2024-11-05 04:39:47.514802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.095 [2024-11-05 04:39:47.526199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.095 [2024-11-05 04:39:47.526216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.095 [2024-11-05 04:39:47.526223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.095 [2024-11-05 04:39:47.540083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.095 [2024-11-05 04:39:47.540100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:24506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.095 [2024-11-05 04:39:47.540106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.095 [2024-11-05 04:39:47.552910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.095 [2024-11-05 04:39:47.552930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.095 [2024-11-05 04:39:47.552937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.095 [2024-11-05 04:39:47.565797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.095 [2024-11-05 04:39:47.565813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.095 [2024-11-05 04:39:47.565819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.095 [2024-11-05 04:39:47.576257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.095 [2024-11-05 04:39:47.576274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.095 [2024-11-05 04:39:47.576280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.095 [2024-11-05 04:39:47.589111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.095 [2024-11-05 04:39:47.589128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.095 [2024-11-05 04:39:47.589135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.095 [2024-11-05 04:39:47.602316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.095 [2024-11-05 04:39:47.602333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.095 [2024-11-05 04:39:47.602339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.095 [2024-11-05 04:39:47.615833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.095 [2024-11-05 04:39:47.615851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.095 [2024-11-05 04:39:47.615857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.095 [2024-11-05 04:39:47.629128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.095 [2024-11-05 04:39:47.629144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.095 [2024-11-05 04:39:47.629150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.095 [2024-11-05 04:39:47.641107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.096 [2024-11-05 04:39:47.641124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.096 [2024-11-05 04:39:47.641130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.096 [2024-11-05 04:39:47.652917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.096 [2024-11-05 04:39:47.652934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.096 [2024-11-05 04:39:47.652940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.096 [2024-11-05 04:39:47.665558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.096 [2024-11-05 04:39:47.665575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.096 [2024-11-05 04:39:47.665582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.096 [2024-11-05 04:39:47.678763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.096 [2024-11-05 04:39:47.678780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.096 [2024-11-05 04:39:47.678786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.096 [2024-11-05 04:39:47.689353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.096 [2024-11-05 04:39:47.689369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.096 [2024-11-05 04:39:47.689376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.096 [2024-11-05 04:39:47.701380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.096 [2024-11-05 04:39:47.701397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.096 [2024-11-05 04:39:47.701404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.096 [2024-11-05 04:39:47.715894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.096 [2024-11-05 04:39:47.715911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.096 [2024-11-05 04:39:47.715917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.096 20155.00 IOPS, 78.73 MiB/s [2024-11-05T03:39:47.736Z] [2024-11-05 04:39:47.731517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.096 [2024-11-05 04:39:47.731533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.096 [2024-11-05 04:39:47.731539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.744369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.744386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.744393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.755308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.755324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23780 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.755330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.768780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.768797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.768807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.781634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.781651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.781657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.795661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.795678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.795684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.806469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.806486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:11346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.806492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.818823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.818840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.818846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.832099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.832116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.832122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.845689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.845706] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:114 nsid:1 lba:8739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.845713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.856843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.856860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.856866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.868625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.868642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.868649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.882910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.882927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17902 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.882933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.895324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.895341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.895348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.905662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.905679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.905685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.919212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.919230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:3590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.919236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.932836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 
04:39:47.932853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.932859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.945931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.945948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.945954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.957680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.957697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.957703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.968600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.968617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.968623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.357 [2024-11-05 04:39:47.981558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.357 [2024-11-05 04:39:47.981576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5827 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.357 [2024-11-05 04:39:47.981585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:47.995786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:47.995803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:47.995809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.007450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.007467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.007473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.022572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.022589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:22459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.022595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.036997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.037014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.037021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.047500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.047517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.047523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.060481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.060499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.060505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.073541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.073558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.073565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.086499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.086516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.086522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.099004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.099023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.099029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.112307] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.112326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.112332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.122927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.122945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.122951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.136090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.136107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.136113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.149500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.149517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.149524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.162874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.162891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.162897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.173534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.173551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.173558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.186268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.186285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.186292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.199257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.199275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.199282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.210948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.210965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:3602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.210972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.224133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.224150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.224157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.236974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.236991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:23071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.236998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.619 [2024-11-05 04:39:48.250045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.619 [2024-11-05 04:39:48.250063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:22185 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.619 [2024-11-05 04:39:48.250069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.880 [2024-11-05 04:39:48.261388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.880 [2024-11-05 04:39:48.261405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:24433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.880 [2024-11-05 04:39:48.261412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.880 [2024-11-05 04:39:48.275763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.880 [2024-11-05 04:39:48.275781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.880 [2024-11-05 04:39:48.275787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.880 [2024-11-05 04:39:48.289000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.880 [2024-11-05 04:39:48.289017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.880 [2024-11-05 04:39:48.289023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.880 [2024-11-05 04:39:48.300329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.880 [2024-11-05 04:39:48.300346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:15839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.880 [2024-11-05 04:39:48.300353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.880 [2024-11-05 04:39:48.313933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.880 [2024-11-05 04:39:48.313951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.880 [2024-11-05 04:39:48.313960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.880 [2024-11-05 04:39:48.326446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.880 [2024-11-05 04:39:48.326464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:21951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.880 [2024-11-05 04:39:48.326470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.880 [2024-11-05 04:39:48.337865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.880 [2024-11-05 04:39:48.337882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.880 [2024-11-05 04:39:48.337888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.880 [2024-11-05 04:39:48.351133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.880 [2024-11-05 04:39:48.351150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.881 [2024-11-05 04:39:48.351157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.881 [2024-11-05 04:39:48.364264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.881 [2024-11-05 04:39:48.364281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.881 [2024-11-05 04:39:48.364287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.881 [2024-11-05 04:39:48.376355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.881 [2024-11-05 04:39:48.376372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.881 [2024-11-05 04:39:48.376378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.881 [2024-11-05 04:39:48.388556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.881 [2024-11-05 04:39:48.388573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:14456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.881 [2024-11-05 04:39:48.388579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.881 [2024-11-05 04:39:48.403495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.881 [2024-11-05 04:39:48.403512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.881 [2024-11-05 04:39:48.403518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.881 [2024-11-05 04:39:48.416695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.881 [2024-11-05 04:39:48.416712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:1950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.881 [2024-11-05 04:39:48.416719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.881 [2024-11-05 04:39:48.427478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.881 [2024-11-05 04:39:48.427500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.881 [2024-11-05 04:39:48.427507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.881 [2024-11-05 04:39:48.439041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.881 [2024-11-05 04:39:48.439057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.881 [2024-11-05 04:39:48.439063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.881 [2024-11-05 04:39:48.452448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.881 [2024-11-05 04:39:48.452466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:34.881 [2024-11-05 04:39:48.452472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.881 [2024-11-05 04:39:48.465776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.881 [2024-11-05 04:39:48.465793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.881 [2024-11-05 04:39:48.465799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.881 [2024-11-05 04:39:48.479604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.881 [2024-11-05 04:39:48.479621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:23209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.881 [2024-11-05 04:39:48.479627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.881 [2024-11-05 04:39:48.491717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.881 [2024-11-05 04:39:48.491734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.881 [2024-11-05 04:39:48.491740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.881 [2024-11-05 04:39:48.501824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.881 [2024-11-05 04:39:48.501841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.881 [2024-11-05 04:39:48.501848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:34.881 [2024-11-05 04:39:48.515288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:34.881 [2024-11-05 04:39:48.515306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:34.881 [2024-11-05 04:39:48.515313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.528588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:35.142 [2024-11-05 04:39:48.528605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:20057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.142 [2024-11-05 04:39:48.528615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.540768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:35.142 [2024-11-05 04:39:48.540785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 
lba:1625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.142 [2024-11-05 04:39:48.540791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.552498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:35.142 [2024-11-05 04:39:48.552515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:15615 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.142 [2024-11-05 04:39:48.552521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.566457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:35.142 [2024-11-05 04:39:48.566475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.142 [2024-11-05 04:39:48.566481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.578956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:35.142 [2024-11-05 04:39:48.578972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.142 [2024-11-05 04:39:48.578979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.589788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:35.142 [2024-11-05 04:39:48.589805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.142 [2024-11-05 04:39:48.589812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.603659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:35.142 [2024-11-05 04:39:48.603676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:4693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.142 [2024-11-05 04:39:48.603683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.616762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:35.142 [2024-11-05 04:39:48.616779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.142 [2024-11-05 04:39:48.616785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.628629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:35.142 [2024-11-05 04:39:48.628646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.142 [2024-11-05 04:39:48.628652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.639153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:35.142 [2024-11-05 04:39:48.639173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.142 [2024-11-05 04:39:48.639179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.653277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:35.142 [2024-11-05 04:39:48.653294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.142 [2024-11-05 04:39:48.653300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.666621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:35.142 [2024-11-05 04:39:48.666638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.142 [2024-11-05 04:39:48.666645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.679303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:35.142 [2024-11-05 04:39:48.679320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.142 [2024-11-05 04:39:48.679326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.691983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:35.142 [2024-11-05 04:39:48.692000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.142 [2024-11-05 04:39:48.692006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.702015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 00:28:35.142 [2024-11-05 04:39:48.702031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:35.142 [2024-11-05 04:39:48.702037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:35.142 [2024-11-05 04:39:48.715084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e253b0) 
00:28:35.142 [2024-11-05 04:39:48.715101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:35.142 [2024-11-05 04:39:48.715107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:35.142 20170.00 IOPS, 78.79 MiB/s
00:28:35.142 Latency(us)
00:28:35.142 [2024-11-05T03:39:48.782Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:35.142 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:35.142 nvme0n1 : 2.01 20196.74 78.89 0.00 0.00 6329.59 2266.45 16602.45
00:28:35.142 [2024-11-05T03:39:48.782Z] ===================================================================================================================
00:28:35.142 [2024-11-05T03:39:48.782Z] Total : 20196.74 78.89 0.00 0.00 6329.59 2266.45 16602.45
00:28:35.142 {
00:28:35.142   "results": [
00:28:35.142     {
00:28:35.142       "job": "nvme0n1",
00:28:35.142       "core_mask": "0x2",
00:28:35.142       "workload": "randread",
00:28:35.142       "status": "finished",
00:28:35.142       "queue_depth": 128,
00:28:35.142       "io_size": 4096,
00:28:35.142       "runtime": 2.00676,
00:28:35.142       "iops": 20196.73503557974,
00:28:35.142       "mibps": 78.89349623273336,
00:28:35.142       "io_failed": 0,
00:28:35.142       "io_timeout": 0,
00:28:35.142       "avg_latency_us": 6329.594041615264,
00:28:35.142       "min_latency_us": 2266.4533333333334,
00:28:35.142       "max_latency_us": 16602.453333333335
00:28:35.142     }
00:28:35.142   ],
00:28:35.142   "core_count": 1
00:28:35.142 }
00:28:35.142 04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:35.142 04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:35.142 04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:35.142 | .driver_specific
00:28:35.142 | .nvme_error
00:28:35.142 | .status_code
00:28:35.142 | .command_transient_transport_error'
00:28:35.142 04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:35.403 04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 158 > 0 ))
00:28:35.403 04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3163451
00:28:35.403 04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3163451 ']'
00:28:35.403 04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3163451
00:28:35.403 04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:28:35.403 04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:35.403 04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3163451
00:28:35.403 04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:35.403 04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:35.403 04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3163451'
killing process with pid 3163451
04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3163451
Received shutdown signal, test time was about 2.000000 seconds
00:28:35.403
00:28:35.403 Latency(us)
[2024-11-05T03:39:49.043Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-05T03:39:49.043Z] ===================================================================================================================
[2024-11-05T03:39:49.043Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:35.403 04:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3163451
00:28:35.663 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:35.663 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:35.663 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:35.663 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:35.663 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:35.663 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3164138
00:28:35.663 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3164138 /var/tmp/bperf.sock
00:28:35.663 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:28:35.663 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3164138 ']'
00:28:35.663 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:35.663 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:28:35.663 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:28:35.663 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:35.663 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:35.663 [2024-11-05 04:39:49.143465] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization...
00:28:35.663 [2024-11-05 04:39:49.143520] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3164138 ]
00:28:35.663 I/O size of 131072 is greater than zero copy threshold (65536). Zero copy mechanism will not be used.
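To make the readback traced above easier to follow: digest.sh pulls its error count out of bdev_get_iostat and treats any non-zero value as a pass, which is what the (( 158 > 0 )) check encodes. A minimal shell sketch of that step, using only the socket, rpc.py path, and jq filter visible in the trace; the function name count_transient_errors is hypothetical, not part of the harness:

# Count COMMAND TRANSIENT TRANSPORT ERROR completions recorded for one bdev.
# The per-status-code counters under .driver_specific.nvme_error exist because
# the harness enables bdev_nvme_set_options --nvme-error-stat (visible in the
# next run's setup below).
count_transient_errors() {
    local bdev=$1
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}
# Pass criterion: at least one injected digest error was counted. The run
# above passed with 158 such errors, i.e. (( 158 > 0 )).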
00:28:35.663 [2024-11-05 04:39:49.225144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:35.663 [2024-11-05 04:39:49.253589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:36.603 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:36.603 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:28:36.603 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:36.603 04:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:36.603 04:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:36.603 04:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:36.603 04:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:36.603 04:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:36.603 04:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:36.603 04:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:36.864 nvme0n1
00:28:36.864 04:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:28:36.864 04:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:36.864 04:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:36.864 04:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:36.864 04:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:36.864 04:39:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:36.864 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:36.864 Zero copy mechanism will not be used.
00:28:36.864 Running I/O for 2 seconds...
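Setup for the second run is now complete. For reference, the RPC sequence just traced condenses to the sketch below; bperf_rpc, rpc_cmd, and bperf_py are harness wrappers around rpc.py and bdevperf.py, every command and argument is taken verbatim from the trace, and only the comments are added:

# Track NVMe errors per status code and never retry failed I/O, so every
# injected digest failure surfaces as a COMMAND TRANSIENT TRANSPORT ERROR.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# Start clean: no crc32c corruption while the controller attaches.
rpc_cmd accel_error_inject_error -o crc32c -t disable
# Attach over TCP with data digest enabled (--ddgst); this creates nvme0n1.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# Corrupt crc32c results at an interval of 32 operations, so data digest
# verification fails periodically on the receive path.
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
# Drive the randread/131072/qd16 workload defined at bdevperf launch.
bperf_py perform_tests

Each injected failure then appears in the stream that follows as an nvme_tcp.c:1365 data digest error on tqpair 0x59f840, with the affected READ completed as TRANSIENT TRANSPORT ERROR (00/22); the sqhd values on those completions advance in steps of 0x20, matching the 32-operation injection interval.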
00:28:36.864 [2024-11-05 04:39:50.457222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:36.864 [2024-11-05 04:39:50.457259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.864 [2024-11-05 04:39:50.457269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.864 [2024-11-05 04:39:50.463139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:36.864 [2024-11-05 04:39:50.463161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.864 [2024-11-05 04:39:50.463168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:36.864 [2024-11-05 04:39:50.470413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:36.864 [2024-11-05 04:39:50.470433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.864 [2024-11-05 04:39:50.470440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:36.864 [2024-11-05 04:39:50.480751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:36.864 [2024-11-05 04:39:50.480770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.864 [2024-11-05 04:39:50.480777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:36.864 [2024-11-05 04:39:50.487542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:36.864 [2024-11-05 04:39:50.487560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.864 [2024-11-05 04:39:50.487567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:36.864 [2024-11-05 04:39:50.496394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:36.864 [2024-11-05 04:39:50.496413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:36.864 [2024-11-05 04:39:50.496419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.125 [2024-11-05 04:39:50.505072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.125 [2024-11-05 04:39:50.505091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.125 [2024-11-05 04:39:50.505098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.125 [2024-11-05 04:39:50.515573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.125 [2024-11-05 04:39:50.515591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.125 [2024-11-05 04:39:50.515598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.125 [2024-11-05 04:39:50.526313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.125 [2024-11-05 04:39:50.526331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.125 [2024-11-05 04:39:50.526338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.125 [2024-11-05 04:39:50.533923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.125 [2024-11-05 04:39:50.533941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.125 [2024-11-05 04:39:50.533951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.125 [2024-11-05 04:39:50.545030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.125 [2024-11-05 04:39:50.545049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.125 [2024-11-05 04:39:50.545055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.125 [2024-11-05 04:39:50.555564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.125 [2024-11-05 04:39:50.555583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.125 [2024-11-05 04:39:50.555589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.125 [2024-11-05 04:39:50.567207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.125 [2024-11-05 04:39:50.567226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.125 [2024-11-05 04:39:50.567232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.125 [2024-11-05 04:39:50.579126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.125 [2024-11-05 04:39:50.579145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.125 [2024-11-05 04:39:50.579151] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.125 [2024-11-05 04:39:50.589976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.125 [2024-11-05 04:39:50.589995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.125 [2024-11-05 04:39:50.590002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.125 [2024-11-05 04:39:50.600173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.125 [2024-11-05 04:39:50.600192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.125 [2024-11-05 04:39:50.600198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.125 [2024-11-05 04:39:50.608750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.125 [2024-11-05 04:39:50.608769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.125 [2024-11-05 04:39:50.608775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.125 [2024-11-05 04:39:50.617854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.125 [2024-11-05 04:39:50.617872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.125 [2024-11-05 04:39:50.617878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.125 [2024-11-05 04:39:50.628879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.125 [2024-11-05 04:39:50.628898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.125 [2024-11-05 04:39:50.628904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.125 [2024-11-05 04:39:50.640521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.125 [2024-11-05 04:39:50.640540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.640546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.652009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.652028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.652034] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.664002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.664020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.664026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.674084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.674102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.674108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.684194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.684212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.684219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.689879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.689897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.689903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.695190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.695208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.695214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.700898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.700916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.700926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.706885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.706903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:37.126 [2024-11-05 04:39:50.706909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.712741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.712762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.712769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.718143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.718161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.718167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.723645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.723662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.723668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.729175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.729193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.729199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.734532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.734550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.734556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.740763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.740781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.740788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.744229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.744246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.744252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.749669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.749690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.749696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.126 [2024-11-05 04:39:50.757589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.126 [2024-11-05 04:39:50.757607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.126 [2024-11-05 04:39:50.757613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.387 [2024-11-05 04:39:50.763065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.387 [2024-11-05 04:39:50.763082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.387 [2024-11-05 04:39:50.763089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.387 [2024-11-05 04:39:50.770153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.387 [2024-11-05 04:39:50.770170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.387 [2024-11-05 04:39:50.770177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.387 [2024-11-05 04:39:50.777351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.387 [2024-11-05 04:39:50.777368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.387 [2024-11-05 04:39:50.777375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.387 [2024-11-05 04:39:50.783261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.387 [2024-11-05 04:39:50.783278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.387 [2024-11-05 04:39:50.783284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.387 [2024-11-05 04:39:50.793884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.387 [2024-11-05 04:39:50.793901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.387 [2024-11-05 04:39:50.793907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.387 [2024-11-05 04:39:50.804654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.387 [2024-11-05 04:39:50.804671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.387 [2024-11-05 04:39:50.804677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.387 [2024-11-05 04:39:50.816308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.387 [2024-11-05 04:39:50.816325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.387 [2024-11-05 04:39:50.816331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.387 [2024-11-05 04:39:50.827106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.387 [2024-11-05 04:39:50.827124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.387 [2024-11-05 04:39:50.827130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.387 [2024-11-05 04:39:50.839507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.387 [2024-11-05 04:39:50.839525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.387 [2024-11-05 04:39:50.839531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.387 [2024-11-05 04:39:50.848059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.387 [2024-11-05 04:39:50.848077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.387 [2024-11-05 04:39:50.848083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.387 [2024-11-05 04:39:50.854342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.387 [2024-11-05 04:39:50.854359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.387 [2024-11-05 04:39:50.854365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.387 [2024-11-05 04:39:50.861062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.387 [2024-11-05 04:39:50.861079] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.861085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.867910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:50.867928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.867934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.876709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:50.876726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.876732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.886338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:50.886355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.886362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.895764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:50.895782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.895791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.903750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:50.903767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.903774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.913104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:50.913122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.913128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.920131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 
[2024-11-05 04:39:50.920149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.920155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.928706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:50.928723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.928730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.934027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:50.934044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.934050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.943371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:50.943388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.943395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.951080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:50.951097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.951103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.959573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:50.959591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.959597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.966496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:50.966516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.966522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.975904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:50.975922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.975928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.987165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:50.987183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.987189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:50.999041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:50.999059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:50.999065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:51.008554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:51.008572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:51.008579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.388 [2024-11-05 04:39:51.020547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.388 [2024-11-05 04:39:51.020566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.388 [2024-11-05 04:39:51.020572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.649 [2024-11-05 04:39:51.027406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.649 [2024-11-05 04:39:51.027423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.649 [2024-11-05 04:39:51.027429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.649 [2024-11-05 04:39:51.035048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.649 [2024-11-05 04:39:51.035067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.649 [2024-11-05 04:39:51.035073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.041344] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.041363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.041369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.051813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.051832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.051838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.060955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.060974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.060980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.068064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.068081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.068087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.076671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.076689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.076695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.082156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.082174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.082180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.090172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.090190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.090196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:28:37.650 [2024-11-05 04:39:51.101180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.101199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.101205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.113192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.113210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.113216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.118793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.118811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.118820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.126485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.126503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.126509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.133203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.133222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.133228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.140759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.140777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.140783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.150717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.150736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.150742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.159806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.159824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.159830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.165234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.165252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.165258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.171255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.171273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.171280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.182146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.182164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.182170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.193085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.193107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.193114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.202504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.202522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.202529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.212825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.212843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.212849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.222193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.222211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.222217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.229018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.229036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.229042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.238818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.238835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.238841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.245912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.245930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.245936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.255802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.255821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.650 [2024-11-05 04:39:51.255827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.650 [2024-11-05 04:39:51.265853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.650 [2024-11-05 04:39:51.265871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.651 [2024-11-05 04:39:51.265877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.651 [2024-11-05 04:39:51.273255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.651 [2024-11-05 04:39:51.273273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.651 [2024-11-05 04:39:51.273279] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.651 [2024-11-05 04:39:51.282232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.651 [2024-11-05 04:39:51.282251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.651 [2024-11-05 04:39:51.282257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.912 [2024-11-05 04:39:51.290218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.912 [2024-11-05 04:39:51.290237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 [2024-11-05 04:39:51.290243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.912 [2024-11-05 04:39:51.296874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.912 [2024-11-05 04:39:51.296893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 [2024-11-05 04:39:51.296899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.912 [2024-11-05 04:39:51.305533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.912 [2024-11-05 04:39:51.305550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 [2024-11-05 04:39:51.305556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.912 [2024-11-05 04:39:51.314170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.912 [2024-11-05 04:39:51.314188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 [2024-11-05 04:39:51.314194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.912 [2024-11-05 04:39:51.323096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.912 [2024-11-05 04:39:51.323115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 [2024-11-05 04:39:51.323121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.912 [2024-11-05 04:39:51.329741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.912 [2024-11-05 04:39:51.329763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 
[2024-11-05 04:39:51.329770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.912 [2024-11-05 04:39:51.335060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.912 [2024-11-05 04:39:51.335078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 [2024-11-05 04:39:51.335087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.912 [2024-11-05 04:39:51.344135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.912 [2024-11-05 04:39:51.344154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 [2024-11-05 04:39:51.344160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.912 [2024-11-05 04:39:51.355055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.912 [2024-11-05 04:39:51.355073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.912 [2024-11-05 04:39:51.355079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.912 [2024-11-05 04:39:51.362965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.912 [2024-11-05 04:39:51.362983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.362989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.370274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.370292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.370298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.379151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.379168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.379175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.388042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.388060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17696 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.388066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.396388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.396407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.396413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.404964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.404983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.404989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.411409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.411430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.411437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.417253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.417272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.417278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.426298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.426317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.426323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.436587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.436605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.436611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.445023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.445041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.445047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.913 3660.00 IOPS, 457.50 MiB/s [2024-11-05T03:39:51.553Z] [2024-11-05 04:39:51.451822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.451840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.451846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.457028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.457046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.457052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.466992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.467011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.467017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.476983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.477001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.477011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.489743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.489766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.489773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.502251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.502270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.502276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.512911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 
[2024-11-05 04:39:51.512930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.512936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.522516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.522535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.522541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.533521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.533540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.533547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:37.913 [2024-11-05 04:39:51.541874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:37.913 [2024-11-05 04:39:51.541892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:37.913 [2024-11-05 04:39:51.541898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.174 [2024-11-05 04:39:51.549971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.174 [2024-11-05 04:39:51.549990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.174 [2024-11-05 04:39:51.549998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.174 [2024-11-05 04:39:51.559798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.174 [2024-11-05 04:39:51.559817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.174 [2024-11-05 04:39:51.559823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.174 [2024-11-05 04:39:51.569008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.174 [2024-11-05 04:39:51.569033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.174 [2024-11-05 04:39:51.569039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.174 [2024-11-05 04:39:51.579946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x59f840) 00:28:38.174 [2024-11-05 04:39:51.579964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.174 [2024-11-05 04:39:51.579970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.174 [2024-11-05 04:39:51.589952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.174 [2024-11-05 04:39:51.589970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.174 [2024-11-05 04:39:51.589976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.174 [2024-11-05 04:39:51.600914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.174 [2024-11-05 04:39:51.600931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.174 [2024-11-05 04:39:51.600938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.174 [2024-11-05 04:39:51.610484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.174 [2024-11-05 04:39:51.610502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.174 [2024-11-05 04:39:51.610508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.174 [2024-11-05 04:39:51.615775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.174 [2024-11-05 04:39:51.615793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.174 [2024-11-05 04:39:51.615799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.174 [2024-11-05 04:39:51.624000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.624019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.624025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.634534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.634553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.634559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.644429] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.644447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.644453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.655568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.655587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.655594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.661892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.661911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.661917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.673993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.674012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.674018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.684247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.684266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.684272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.696032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.696051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.696057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.706055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.706073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.706080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:38.175 [2024-11-05 04:39:51.717643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.717662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.717668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.728556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.728575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.728582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.739358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.739378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.739387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.749717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.749736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.749742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.760863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.760881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.760887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.772227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.772245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.772251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.784471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.784491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.784497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.796783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.796801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.796807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.175 [2024-11-05 04:39:51.807831] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.175 [2024-11-05 04:39:51.807850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.175 [2024-11-05 04:39:51.807856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.436 [2024-11-05 04:39:51.818854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.436 [2024-11-05 04:39:51.818872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.436 [2024-11-05 04:39:51.818878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.436 [2024-11-05 04:39:51.828973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.436 [2024-11-05 04:39:51.828992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.436 [2024-11-05 04:39:51.828998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.436 [2024-11-05 04:39:51.839161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.436 [2024-11-05 04:39:51.839183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.436 [2024-11-05 04:39:51.839189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.436 [2024-11-05 04:39:51.847479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.436 [2024-11-05 04:39:51.847497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.436 [2024-11-05 04:39:51.847504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.436 [2024-11-05 04:39:51.859255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.436 [2024-11-05 04:39:51.859273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.436 [2024-11-05 04:39:51.859279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.436 [2024-11-05 04:39:51.872128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.436 [2024-11-05 04:39:51.872147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.436 [2024-11-05 04:39:51.872153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.436 [2024-11-05 04:39:51.885104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.436 [2024-11-05 04:39:51.885122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.436 [2024-11-05 04:39:51.885128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.436 [2024-11-05 04:39:51.898332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.436 [2024-11-05 04:39:51.898350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.436 [2024-11-05 04:39:51.898356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.436 [2024-11-05 04:39:51.911309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.436 [2024-11-05 04:39:51.911326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.436 [2024-11-05 04:39:51.911332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.436 [2024-11-05 04:39:51.924306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.436 [2024-11-05 04:39:51.924325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.436 [2024-11-05 04:39:51.924331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.436 [2024-11-05 04:39:51.937099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.436 [2024-11-05 04:39:51.937117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.436 [2024-11-05 04:39:51.937123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.436 [2024-11-05 04:39:51.949460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.436 [2024-11-05 04:39:51.949478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.436 [2024-11-05 04:39:51.949485] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.436 [2024-11-05 04:39:51.962341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.436 [2024-11-05 04:39:51.962360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.436 [2024-11-05 04:39:51.962366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.436 [2024-11-05 04:39:51.975078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.436 [2024-11-05 04:39:51.975097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.437 [2024-11-05 04:39:51.975103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.437 [2024-11-05 04:39:51.988235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.437 [2024-11-05 04:39:51.988253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.437 [2024-11-05 04:39:51.988260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.437 [2024-11-05 04:39:52.000895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.437 [2024-11-05 04:39:52.000913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.437 [2024-11-05 04:39:52.000919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.437 [2024-11-05 04:39:52.013986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.437 [2024-11-05 04:39:52.014005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.437 [2024-11-05 04:39:52.014011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.437 [2024-11-05 04:39:52.026894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.437 [2024-11-05 04:39:52.026912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.437 [2024-11-05 04:39:52.026919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.437 [2024-11-05 04:39:52.038986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.437 [2024-11-05 04:39:52.039004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.437 
[2024-11-05 04:39:52.039010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.437 [2024-11-05 04:39:52.047913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.437 [2024-11-05 04:39:52.047931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.437 [2024-11-05 04:39:52.047941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.437 [2024-11-05 04:39:52.059653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.437 [2024-11-05 04:39:52.059672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.437 [2024-11-05 04:39:52.059678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.437 [2024-11-05 04:39:52.068534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.437 [2024-11-05 04:39:52.068553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.437 [2024-11-05 04:39:52.068559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.697 [2024-11-05 04:39:52.079759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.697 [2024-11-05 04:39:52.079777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.697 [2024-11-05 04:39:52.079783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.697 [2024-11-05 04:39:52.088852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.697 [2024-11-05 04:39:52.088870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.697 [2024-11-05 04:39:52.088877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.697 [2024-11-05 04:39:52.099440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.697 [2024-11-05 04:39:52.099458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.697 [2024-11-05 04:39:52.099465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.697 [2024-11-05 04:39:52.109282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.697 [2024-11-05 04:39:52.109301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:800 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:38.697 [2024-11-05 04:39:52.109307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.697 [2024-11-05 04:39:52.119872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.697 [2024-11-05 04:39:52.119891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.697 [2024-11-05 04:39:52.119897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.697 [2024-11-05 04:39:52.130325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.697 [2024-11-05 04:39:52.130343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.697 [2024-11-05 04:39:52.130350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.697 [2024-11-05 04:39:52.142366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.697 [2024-11-05 04:39:52.142388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.697 [2024-11-05 04:39:52.142394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.697 [2024-11-05 04:39:52.154988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.697 [2024-11-05 04:39:52.155007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.697 [2024-11-05 04:39:52.155013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.697 [2024-11-05 04:39:52.167984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.697 [2024-11-05 04:39:52.168003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.698 [2024-11-05 04:39:52.168009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.698 [2024-11-05 04:39:52.181205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.698 [2024-11-05 04:39:52.181223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.698 [2024-11-05 04:39:52.181229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.698 [2024-11-05 04:39:52.193335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.698 [2024-11-05 04:39:52.193353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.698 [2024-11-05 04:39:52.193359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.698 [2024-11-05 04:39:52.205009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.698 [2024-11-05 04:39:52.205027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.698 [2024-11-05 04:39:52.205033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.698 [2024-11-05 04:39:52.217535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.698 [2024-11-05 04:39:52.217553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.698 [2024-11-05 04:39:52.217559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.698 [2024-11-05 04:39:52.229789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.698 [2024-11-05 04:39:52.229807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.698 [2024-11-05 04:39:52.229813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.698 [2024-11-05 04:39:52.240344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.698 [2024-11-05 04:39:52.240362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.698 [2024-11-05 04:39:52.240368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.698 [2024-11-05 04:39:52.251169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.698 [2024-11-05 04:39:52.251187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.698 [2024-11-05 04:39:52.251193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.698 [2024-11-05 04:39:52.261332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.698 [2024-11-05 04:39:52.261350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.698 [2024-11-05 04:39:52.261357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.698 [2024-11-05 04:39:52.271610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.698 [2024-11-05 04:39:52.271628] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.698 [2024-11-05 04:39:52.271634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.698 [2024-11-05 04:39:52.284143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.698 [2024-11-05 04:39:52.284162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.698 [2024-11-05 04:39:52.284168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.698 [2024-11-05 04:39:52.296161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.698 [2024-11-05 04:39:52.296179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.698 [2024-11-05 04:39:52.296186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.698 [2024-11-05 04:39:52.309291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.698 [2024-11-05 04:39:52.309309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.698 [2024-11-05 04:39:52.309315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.698 [2024-11-05 04:39:52.321429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.698 [2024-11-05 04:39:52.321447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.698 [2024-11-05 04:39:52.321453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.698 [2024-11-05 04:39:52.334071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.698 [2024-11-05 04:39:52.334089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.698 [2024-11-05 04:39:52.334095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.958 [2024-11-05 04:39:52.344899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.958 [2024-11-05 04:39:52.344917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.958 [2024-11-05 04:39:52.344927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.958 [2024-11-05 04:39:52.355200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.958 [2024-11-05 04:39:52.355218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.958 [2024-11-05 04:39:52.355224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.958 [2024-11-05 04:39:52.365773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.958 [2024-11-05 04:39:52.365791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.958 [2024-11-05 04:39:52.365797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.958 [2024-11-05 04:39:52.376864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.958 [2024-11-05 04:39:52.376882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.958 [2024-11-05 04:39:52.376888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.958 [2024-11-05 04:39:52.388177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.958 [2024-11-05 04:39:52.388195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.958 [2024-11-05 04:39:52.388202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:38.958 [2024-11-05 04:39:52.399046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.958 [2024-11-05 04:39:52.399064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.958 [2024-11-05 04:39:52.399070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:38.958 [2024-11-05 04:39:52.410239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.958 [2024-11-05 04:39:52.410257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-05 04:39:52.410263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:38.959 [2024-11-05 04:39:52.422443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.959 [2024-11-05 04:39:52.422460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:38.959 [2024-11-05 04:39:52.422467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:38.959 [2024-11-05 04:39:52.433093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840) 00:28:38.959 
[2024-11-05 04:39:52.433111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.959 [2024-11-05 04:39:52.433118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:38.959 [2024-11-05 04:39:52.443127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840)
00:28:38.959 [2024-11-05 04:39:52.443149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.959 [2024-11-05 04:39:52.443155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:28:38.959 3254.50 IOPS, 406.81 MiB/s [2024-11-05T03:39:52.599Z] [2024-11-05 04:39:52.454875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x59f840)
00:28:38.959 [2024-11-05 04:39:52.454894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:38.959 [2024-11-05 04:39:52.454900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:28:38.959
00:28:38.959 Latency(us)
00:28:38.959 [2024-11-05T03:39:52.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:38.959 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:38.959 nvme0n1 : 2.01 3251.56 406.44 0.00 0.00 4916.56 1146.88 13434.88
00:28:38.959 [2024-11-05T03:39:52.599Z] ===================================================================================================================
00:28:38.959 [2024-11-05T03:39:52.599Z] Total : 3251.56 406.44 0.00 0.00 4916.56 1146.88 13434.88
00:28:38.959 {
00:28:38.959   "results": [
00:28:38.959     {
00:28:38.959       "job": "nvme0n1",
00:28:38.959       "core_mask": "0x2",
00:28:38.959       "workload": "randread",
00:28:38.959       "status": "finished",
00:28:38.959       "queue_depth": 16,
00:28:38.959       "io_size": 131072,
00:28:38.959       "runtime": 2.00673,
00:28:38.959       "iops": 3251.55850562856,
00:28:38.959       "mibps": 406.44481320357,
00:28:38.959       "io_failed": 0,
00:28:38.959       "io_timeout": 0,
00:28:38.959       "avg_latency_us": 4916.558532822477,
00:28:38.959       "min_latency_us": 1146.88,
00:28:38.959       "max_latency_us": 13434.88
00:28:38.959     }
00:28:38.959   ],
00:28:38.959   "core_count": 1
00:28:38.959 }
00:28:38.959 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:38.959 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:38.959 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:38.959 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:38.959 | .driver_specific
00:28:38.959 | .nvme_error
00:28:38.959 | .status_code
00:28:38.959 | .command_transient_transport_error'
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 210 > 0 ))
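[editor's note] The get_transient_errcount helper traced above is just one RPC call piped through jq; a minimal standalone sketch of the same check, with rpc.py path, socket, bdev name, and jq field path all taken from the trace and the observed count of 210 generalized to any non-zero value, could look like:

  # Hedged sketch, not the test's literal source: count transient
  # transport errors for nvme0n1 via the bperf RPC socket.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  count=$("$RPC" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( count > 0 ))  # the digest-error test only passes if errors were actually counted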
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3164138
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3164138 ']'
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3164138
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3164138
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3164138'
00:28:39.219 killing process with pid 3164138
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3164138
00:28:39.219 Received shutdown signal, test time was about 2.000000 seconds
00:28:39.219
00:28:39.219 Latency(us)
00:28:39.219 [2024-11-05T03:39:52.859Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:39.219 [2024-11-05T03:39:52.859Z] ===================================================================================================================
00:28:39.219 [2024-11-05T03:39:52.859Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3164138
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3164824
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3164824 /var/tmp/bperf.sock
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3164824 ']'
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
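[editor's note] run_bperf_err restarts bdevperf idle on a private RPC socket before configuring it. A hedged sketch of that launch-and-wait step, with binary path and flags copied from the @57/@60 trace; the polling loop is only a rough stand-in for waitforlisten, and rpc_get_methods is used here purely as a liveness probe:

  # -m 2 pins the reactor to core 1; -w/-o/-t/-q describe the randwrite
  # workload; -z makes bdevperf wait for a perform_tests RPC instead of
  # starting I/O immediately (matching the bdevperf.py call seen below).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done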
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:28:39.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:28:39.219 04:39:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:39.479 [2024-11-05 04:39:52.896140] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization...
00:28:39.479 [2024-11-05 04:39:52.896197] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3164824 ]
00:28:39.479 [2024-11-05 04:39:52.980450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:39.479 [2024-11-05 04:39:53.009807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:40.048 04:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:28:40.048 04:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:28:40.048 04:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:40.048 04:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:40.308 04:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:40.308 04:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:40.308 04:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:40.308 04:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:40.308 04:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:40.308 04:39:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:40.567 nvme0n1
00:28:40.567 04:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:40.568 04:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:40.568 04:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:40.568 04:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:40.568 04:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:40.568 04:39:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
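[editor's note] The trace above is the core of the digest-error setup: NVMe error counters and unlimited bdev retries are switched on, the controller is attached with data digest enabled (--ddgst), accel's crc32c operation is put into corrupt mode for the next 256 operations, and only then is the queued workload released. A hedged recap of that sequence as plain RPC calls; every command is verbatim from the trace, only the shell variable is added:

  RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $RPC accel_error_inject_error -o crc32c -t disable          # start from a clean state
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0          # data digest on the TCP qpair
  $RPC accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 crc32c ops
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests                    # release the queued workload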
00:28:40.568 Running I/O for 2 seconds...
00:28:40.568 [2024-11-05 04:39:54.203337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166eb760
00:28:40.568 [2024-11-05 04:39:54.205113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:40.568 [2024-11-05 04:39:54.205139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:28:40.828 [2024-11-05 04:39:54.215278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f81e0
00:28:40.828 [2024-11-05 04:39:54.217026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:40.828 [2024-11-05 04:39:54.217043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:28:40.828 [2024-11-05 04:39:54.225697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ea680
00:28:40.828 [2024-11-05 04:39:54.226808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:22010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:40.828 [2024-11-05 04:39:54.226824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:40.828 [2024-11-05 04:39:54.237787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e95a0
00:28:40.828 [2024-11-05 04:39:54.239088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:40.828 [2024-11-05 04:39:54.239104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:40.828 [2024-11-05 04:39:54.251516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fdeb0
00:28:40.828 [2024-11-05 04:39:54.253265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:40.828 [2024-11-05 04:39:54.253280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:28:40.828 [2024-11-05 04:39:54.261952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ee5c8
00:28:40.828 [2024-11-05 04:39:54.263047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:40.828 [2024-11-05 04:39:54.263062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:28:40.828 [2024-11-05 04:39:54.273137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f7970
00:28:40.828 [2024-11-05 04:39:54.274224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:40.828 [2024-11-05 04:39:54.274240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15
cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:40.828 [2024-11-05 04:39:54.285900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f8a50 00:28:40.828 [2024-11-05 04:39:54.286949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.828 [2024-11-05 04:39:54.286965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:40.828 [2024-11-05 04:39:54.297885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ea680 00:28:40.828 [2024-11-05 04:39:54.298975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:18626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.828 [2024-11-05 04:39:54.298990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:40.828 [2024-11-05 04:39:54.309800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e95a0 00:28:40.828 [2024-11-05 04:39:54.310907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.828 [2024-11-05 04:39:54.310922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:40.829 [2024-11-05 04:39:54.321691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fda78 00:28:40.829 [2024-11-05 04:39:54.322770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.829 [2024-11-05 04:39:54.322785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:40.829 [2024-11-05 04:39:54.333650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fdeb0 00:28:40.829 [2024-11-05 04:39:54.334728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.829 [2024-11-05 04:39:54.334744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:40.829 [2024-11-05 04:39:54.347137] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fb048 00:28:40.829 [2024-11-05 04:39:54.348876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.829 [2024-11-05 04:39:54.348891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:40.829 [2024-11-05 04:39:54.356781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f6890 00:28:40.829 [2024-11-05 04:39:54.357855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:6672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.829 [2024-11-05 04:39:54.357871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:28:40.829 [2024-11-05 04:39:54.369495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e95a0 00:28:40.829 [2024-11-05 04:39:54.370614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.829 [2024-11-05 04:39:54.370629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:40.829 [2024-11-05 04:39:54.381444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e84c0 00:28:40.829 [2024-11-05 04:39:54.382539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.829 [2024-11-05 04:39:54.382554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:28:40.829 [2024-11-05 04:39:54.393368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fb048 00:28:40.829 [2024-11-05 04:39:54.394450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.829 [2024-11-05 04:39:54.394465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:40.829 [2024-11-05 04:39:54.406881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f9f68 00:28:40.829 [2024-11-05 04:39:54.408615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.829 [2024-11-05 04:39:54.408631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:28:40.829 [2024-11-05 04:39:54.416422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e1b48 00:28:40.829 [2024-11-05 04:39:54.417495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.829 [2024-11-05 04:39:54.417511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:28:40.829 [2024-11-05 04:39:54.429084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e1b48 00:28:40.829 [2024-11-05 04:39:54.430162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.829 [2024-11-05 04:39:54.430177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:40.829 [2024-11-05 04:39:54.440997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e1b48 00:28:40.829 [2024-11-05 04:39:54.442086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.829 [2024-11-05 04:39:54.442101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:40.829 [2024-11-05 04:39:54.452904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e1b48 00:28:40.829 [2024-11-05 04:39:54.453982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.829 [2024-11-05 04:39:54.453997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:40.829 [2024-11-05 04:39:54.464817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e1b48 00:28:40.829 [2024-11-05 04:39:54.465888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:40.829 [2024-11-05 04:39:54.465903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:41.090 [2024-11-05 04:39:54.476715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e1b48 00:28:41.090 [2024-11-05 04:39:54.477799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.090 [2024-11-05 04:39:54.477818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:41.090 [2024-11-05 04:39:54.488680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e1b48 00:28:41.090 [2024-11-05 04:39:54.489772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:22765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.090 [2024-11-05 04:39:54.489789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:41.090 [2024-11-05 04:39:54.500593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e1b48 00:28:41.090 [2024-11-05 04:39:54.501670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.090 [2024-11-05 04:39:54.501687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:41.090 [2024-11-05 04:39:54.512471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e7818 00:28:41.090 [2024-11-05 04:39:54.513566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.090 [2024-11-05 04:39:54.513582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:41.090 [2024-11-05 04:39:54.523627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fac10 00:28:41.090 [2024-11-05 04:39:54.524674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.090 [2024-11-05 04:39:54.524690] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:28:41.090 [2024-11-05 04:39:54.536328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ec840 00:28:41.090 [2024-11-05 04:39:54.537369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.090 [2024-11-05 04:39:54.537384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:41.090 [2024-11-05 04:39:54.549807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e1b48 00:28:41.090 [2024-11-05 04:39:54.551517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.090 [2024-11-05 04:39:54.551532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:28:41.090 [2024-11-05 04:39:54.560154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e6fa8 00:28:41.090 [2024-11-05 04:39:54.561184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:4488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.090 [2024-11-05 04:39:54.561199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:41.090 [2024-11-05 04:39:54.573559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fa7d8 00:28:41.090 [2024-11-05 04:39:54.575266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.090 [2024-11-05 04:39:54.575281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:28:41.090 [2024-11-05 04:39:54.585409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ed0b0 00:28:41.090 [2024-11-05 04:39:54.587076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:3532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.090 [2024-11-05 04:39:54.587091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:28:41.090 [2024-11-05 04:39:54.595741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ed920 00:28:41.090 [2024-11-05 04:39:54.596803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.090 [2024-11-05 04:39:54.596819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:41.090 [2024-11-05 04:39:54.607622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fe720 00:28:41.090 [2024-11-05 04:39:54.608661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:25300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.090 [2024-11-05 04:39:54.608676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:41.090 [2024-11-05 04:39:54.621078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e6fa8 00:28:41.090 [2024-11-05 04:39:54.622756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.090 [2024-11-05 04:39:54.622771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:28:41.090 [2024-11-05 04:39:54.631420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e3060 00:28:41.090 [2024-11-05 04:39:54.632458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.091 [2024-11-05 04:39:54.632475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:41.091 [2024-11-05 04:39:54.644821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e3060 00:28:41.091 [2024-11-05 04:39:54.646478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.091 [2024-11-05 04:39:54.646494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:41.091 [2024-11-05 04:39:54.654429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fdeb0 00:28:41.091 [2024-11-05 04:39:54.655435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:12678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.091 [2024-11-05 04:39:54.655450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:41.091 [2024-11-05 04:39:54.667147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166eea00 00:28:41.091 [2024-11-05 04:39:54.668172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.091 [2024-11-05 04:39:54.668188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:41.091 [2024-11-05 04:39:54.680610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e3060 00:28:41.091 [2024-11-05 04:39:54.682272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:23448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.091 [2024-11-05 04:39:54.682287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:41.091 [2024-11-05 04:39:54.691013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e5ec8 00:28:41.091 [2024-11-05 04:39:54.692058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.091 [2024-11-05 04:39:54.692074] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:41.091 [2024-11-05 04:39:54.702921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e4578 00:28:41.091 [2024-11-05 04:39:54.703936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:8352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.091 [2024-11-05 04:39:54.703952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:41.091 [2024-11-05 04:39:54.714828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f57b0 00:28:41.091 [2024-11-05 04:39:54.715815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:22765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.091 [2024-11-05 04:39:54.715831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:41.091 [2024-11-05 04:39:54.726706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fe720 00:28:41.091 [2024-11-05 04:39:54.727711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.091 [2024-11-05 04:39:54.727726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:41.352 [2024-11-05 04:39:54.740198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fa7d8 00:28:41.352 [2024-11-05 04:39:54.741869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.352 [2024-11-05 04:39:54.741885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:41.352 [2024-11-05 04:39:54.750545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ebfd0 00:28:41.352 [2024-11-05 04:39:54.751560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.352 [2024-11-05 04:39:54.751575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:41.352 [2024-11-05 04:39:54.762420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ebfd0 00:28:41.352 [2024-11-05 04:39:54.763430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.352 [2024-11-05 04:39:54.763446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:41.352 [2024-11-05 04:39:54.774331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ebfd0 00:28:41.352 [2024-11-05 04:39:54.775332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.352 [2024-11-05 
04:39:54.775348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:41.352 [2024-11-05 04:39:54.786225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ebfd0 00:28:41.352 [2024-11-05 04:39:54.787233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.352 [2024-11-05 04:39:54.787252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:41.352 [2024-11-05 04:39:54.798127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ebfd0 00:28:41.352 [2024-11-05 04:39:54.799144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.352 [2024-11-05 04:39:54.799159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:41.352 [2024-11-05 04:39:54.810003] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fb048 00:28:41.352 [2024-11-05 04:39:54.811016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:18406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.352 [2024-11-05 04:39:54.811032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:41.352 [2024-11-05 04:39:54.821899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f57b0 00:28:41.352 [2024-11-05 04:39:54.822904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.352 [2024-11-05 04:39:54.822919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:41.352 [2024-11-05 04:39:54.833001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fe720 00:28:41.352 [2024-11-05 04:39:54.833981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:10781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.352 [2024-11-05 04:39:54.833996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:41.352 [2024-11-05 04:39:54.845682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fe720 00:28:41.352 [2024-11-05 04:39:54.846672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.352 [2024-11-05 04:39:54.846688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:41.352 [2024-11-05 04:39:54.857573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fe720 00:28:41.352 [2024-11-05 04:39:54.858568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:41.352 [2024-11-05 04:39:54.858584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:41.352 [2024-11-05 04:39:54.868667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f57b0 00:28:41.352 [2024-11-05 04:39:54.869644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.353 [2024-11-05 04:39:54.869659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:28:41.353 [2024-11-05 04:39:54.881370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f3a28 00:28:41.353 [2024-11-05 04:39:54.882367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.353 [2024-11-05 04:39:54.882383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:41.353 [2024-11-05 04:39:54.894816] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ed920 00:28:41.353 [2024-11-05 04:39:54.896452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.353 [2024-11-05 04:39:54.896468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:41.353 [2024-11-05 04:39:54.905174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fe720 00:28:41.353 [2024-11-05 04:39:54.906175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:6070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.353 [2024-11-05 04:39:54.906192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:41.353 [2024-11-05 04:39:54.917095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fe720 00:28:41.353 [2024-11-05 04:39:54.918040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.353 [2024-11-05 04:39:54.918056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:41.353 [2024-11-05 04:39:54.928998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e38d0 00:28:41.353 [2024-11-05 04:39:54.929968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.353 [2024-11-05 04:39:54.929984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:41.353 [2024-11-05 04:39:54.940935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e5220 00:28:41.353 [2024-11-05 04:39:54.941910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14082 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:41.353 [2024-11-05 04:39:54.941926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:41.353 [2024-11-05 04:39:54.952851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f31b8 00:28:41.353 [2024-11-05 04:39:54.953815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:10439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.353 [2024-11-05 04:39:54.953830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:41.353 [2024-11-05 04:39:54.966308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ef6a8 00:28:41.353 [2024-11-05 04:39:54.967923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.353 [2024-11-05 04:39:54.967938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:41.353 [2024-11-05 04:39:54.976713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fdeb0 00:28:41.353 [2024-11-05 04:39:54.977676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.353 [2024-11-05 04:39:54.977691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:41.353 [2024-11-05 04:39:54.987857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e5220 00:28:41.353 [2024-11-05 04:39:54.988804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.353 [2024-11-05 04:39:54.988819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:41.614 [2024-11-05 04:39:55.000583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f4298 00:28:41.614 [2024-11-05 04:39:55.001550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.614 [2024-11-05 04:39:55.001567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:41.614 [2024-11-05 04:39:55.011737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f1868 00:28:41.614 [2024-11-05 04:39:55.012706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.614 [2024-11-05 04:39:55.012721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:28:41.614 [2024-11-05 04:39:55.024427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f1868 00:28:41.614 [2024-11-05 04:39:55.025410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4689 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:41.614 [2024-11-05 04:39:55.025425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:41.614 [2024-11-05 04:39:55.035550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f31b8 00:28:41.614 [2024-11-05 04:39:55.036506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.614 [2024-11-05 04:39:55.036522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:41.614 [2024-11-05 04:39:55.048280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e5220 00:28:41.614 [2024-11-05 04:39:55.049262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:17532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.614 [2024-11-05 04:39:55.049278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:41.614 [2024-11-05 04:39:55.061818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fdeb0 00:28:41.614 [2024-11-05 04:39:55.063432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:1527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.614 [2024-11-05 04:39:55.063447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:41.614 [2024-11-05 04:39:55.072175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f1868 00:28:41.614 [2024-11-05 04:39:55.073136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22466 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.614 [2024-11-05 04:39:55.073152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:41.614 [2024-11-05 04:39:55.084078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f1868 00:28:41.614 [2024-11-05 04:39:55.085044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.614 [2024-11-05 04:39:55.085059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:41.614 [2024-11-05 04:39:55.095179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f2d80 00:28:41.614 [2024-11-05 04:39:55.096124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.615 [2024-11-05 04:39:55.096139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:41.615 [2024-11-05 04:39:55.107914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e4140 00:28:41.615 [2024-11-05 04:39:55.108885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:7786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.615 [2024-11-05 04:39:55.108900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:41.615 [2024-11-05 04:39:55.121293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f2d80 00:28:41.615 [2024-11-05 04:39:55.122876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:1968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.615 [2024-11-05 04:39:55.122891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:41.615 [2024-11-05 04:39:55.131636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f2510 00:28:41.615 [2024-11-05 04:39:55.132535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.615 [2024-11-05 04:39:55.132551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:41.615 [2024-11-05 04:39:55.143537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f1868 00:28:41.615 [2024-11-05 04:39:55.144453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:9238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.615 [2024-11-05 04:39:55.144469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:41.615 [2024-11-05 04:39:55.155484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fd640 00:28:41.615 [2024-11-05 04:39:55.156421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.615 [2024-11-05 04:39:55.156437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:41.615 [2024-11-05 04:39:55.167383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f4f40 00:28:41.615 [2024-11-05 04:39:55.168310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.615 [2024-11-05 04:39:55.168326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:41.615 [2024-11-05 04:39:55.179317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f4f40 00:28:41.615 [2024-11-05 04:39:55.180239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.615 [2024-11-05 04:39:55.180255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:41.615 21235.00 IOPS, 82.95 MiB/s [2024-11-05T03:39:55.255Z] [2024-11-05 04:39:55.191223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166dfdc0 00:28:41.615 [2024-11-05 04:39:55.192148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.615 [2024-11-05 04:39:55.192165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:41.615 [2024-11-05 04:39:55.204697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fd640 00:28:41.615 [2024-11-05 04:39:55.206270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:1938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.615 [2024-11-05 04:39:55.206285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:41.615 [2024-11-05 04:39:55.215116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f1868 00:28:41.615 [2024-11-05 04:39:55.216039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.615 [2024-11-05 04:39:55.216054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:41.615 [2024-11-05 04:39:55.228578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f2510 00:28:41.615 [2024-11-05 04:39:55.230125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:1658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.615 [2024-11-05 04:39:55.230141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:41.615 [2024-11-05 04:39:55.239264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166dfdc0 00:28:41.615 [2024-11-05 04:39:55.240189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.615 [2024-11-05 04:39:55.240204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.252806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fd640 00:28:41.876 [2024-11-05 04:39:55.254372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.254387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.262445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f0788 00:28:41.876 [2024-11-05 04:39:55.263349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.263365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.275189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166dfdc0 00:28:41.876 [2024-11-05 
04:39:55.276122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.276138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.287110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fd640 00:28:41.876 [2024-11-05 04:39:55.287989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7117 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.288004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.298974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166dfdc0 00:28:41.876 [2024-11-05 04:39:55.299867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:8979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.299883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.310913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e49b0 00:28:41.876 [2024-11-05 04:39:55.311767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.311783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.322788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fd640 00:28:41.876 [2024-11-05 04:39:55.323670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.323686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.336281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fef90 00:28:41.876 [2024-11-05 04:39:55.337779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:11433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.337795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.346611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166feb58 00:28:41.876 [2024-11-05 04:39:55.347466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.347481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.360099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e49b0 
00:28:41.876 [2024-11-05 04:39:55.361631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.361646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.370538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166dfdc0 00:28:41.876 [2024-11-05 04:39:55.371426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:2288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.371442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.382467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f4b08 00:28:41.876 [2024-11-05 04:39:55.383340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:22372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.383356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.395977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166eff18 00:28:41.876 [2024-11-05 04:39:55.397509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:6268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.397525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.406329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fef90 00:28:41.876 [2024-11-05 04:39:55.407190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:23791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.407206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.418252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fef90 00:28:41.876 [2024-11-05 04:39:55.419132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:6219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.419148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:41.876 [2024-11-05 04:39:55.430171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fef90 00:28:41.876 [2024-11-05 04:39:55.431052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.876 [2024-11-05 04:39:55.431067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:41.877 [2024-11-05 04:39:55.442085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with 
pdu=0x2000166fef90 00:28:41.877 [2024-11-05 04:39:55.442955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.877 [2024-11-05 04:39:55.442971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:41.877 [2024-11-05 04:39:55.453997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fef90 00:28:41.877 [2024-11-05 04:39:55.454876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.877 [2024-11-05 04:39:55.454891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:41.877 [2024-11-05 04:39:55.465874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f1430 00:28:41.877 [2024-11-05 04:39:55.466720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:16294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.877 [2024-11-05 04:39:55.466736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:41.877 [2024-11-05 04:39:55.479344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166dfdc0 00:28:41.877 [2024-11-05 04:39:55.480855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:23514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.877 [2024-11-05 04:39:55.480870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:41.877 [2024-11-05 04:39:55.488953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f1ca0 00:28:41.877 [2024-11-05 04:39:55.489809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.877 [2024-11-05 04:39:55.489825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:28:41.877 [2024-11-05 04:39:55.503213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fe2e8 00:28:41.877 [2024-11-05 04:39:55.504735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.877 [2024-11-05 04:39:55.504753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:28:41.877 [2024-11-05 04:39:55.512780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fb8b8 00:28:41.877 [2024-11-05 04:39:55.513628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:10483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:41.877 [2024-11-05 04:39:55.513646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.525494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xf8e520) with pdu=0x2000166fb8b8 00:28:42.138 [2024-11-05 04:39:55.526369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.138 [2024-11-05 04:39:55.526385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.538985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fdeb0 00:28:42.138 [2024-11-05 04:39:55.540471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.138 [2024-11-05 04:39:55.540487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.548595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f1ca0 00:28:42.138 [2024-11-05 04:39:55.549446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.138 [2024-11-05 04:39:55.549462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.561272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f1ca0 00:28:42.138 [2024-11-05 04:39:55.562149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.138 [2024-11-05 04:39:55.562164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.573178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f1ca0 00:28:42.138 [2024-11-05 04:39:55.574055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.138 [2024-11-05 04:39:55.574071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.585106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f1ca0 00:28:42.138 [2024-11-05 04:39:55.585982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.138 [2024-11-05 04:39:55.585998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.597011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f1ca0 00:28:42.138 [2024-11-05 04:39:55.597889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.138 [2024-11-05 04:39:55.597904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.608927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf8e520) with pdu=0x2000166f1ca0 00:28:42.138 [2024-11-05 04:39:55.609786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.138 [2024-11-05 04:39:55.609802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.620835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f1ca0 00:28:42.138 [2024-11-05 04:39:55.621702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.138 [2024-11-05 04:39:55.621717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.635114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fbcf0 00:28:42.138 [2024-11-05 04:39:55.636634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.138 [2024-11-05 04:39:55.636650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.646475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ecc78 00:28:42.138 [2024-11-05 04:39:55.647948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.138 [2024-11-05 04:39:55.647963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.656456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e4578 00:28:42.138 [2024-11-05 04:39:55.657463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.138 [2024-11-05 04:39:55.657478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.669113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e4578 00:28:42.138 [2024-11-05 04:39:55.670132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:13075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.138 [2024-11-05 04:39:55.670148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.682531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e4578 00:28:42.138 [2024-11-05 04:39:55.684186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.138 [2024-11-05 04:39:55.684201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.692907] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f7970 00:28:42.138 [2024-11-05 04:39:55.693915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.138 [2024-11-05 04:39:55.693931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:42.138 [2024-11-05 04:39:55.704847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f7970 00:28:42.139 [2024-11-05 04:39:55.705812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.139 [2024-11-05 04:39:55.705827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:42.139 [2024-11-05 04:39:55.716753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fc998 00:28:42.139 [2024-11-05 04:39:55.717696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.139 [2024-11-05 04:39:55.717711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:42.139 [2024-11-05 04:39:55.728690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e7c50 00:28:42.139 [2024-11-05 04:39:55.729673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.139 [2024-11-05 04:39:55.729689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:42.139 [2024-11-05 04:39:55.740636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ebfd0 00:28:42.139 [2024-11-05 04:39:55.741644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:11194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.139 [2024-11-05 04:39:55.741660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:42.139 [2024-11-05 04:39:55.754112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166dfdc0 00:28:42.139 [2024-11-05 04:39:55.755757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.139 [2024-11-05 04:39:55.755772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:28:42.139 [2024-11-05 04:39:55.763727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e7c50 00:28:42.139 [2024-11-05 04:39:55.764713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.139 [2024-11-05 04:39:55.764728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:28:42.399 [2024-11-05 
04:39:55.776409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e7c50 00:28:42.399 [2024-11-05 04:39:55.777403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:10508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.399 [2024-11-05 04:39:55.777419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:42.399 [2024-11-05 04:39:55.788306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e7c50 00:28:42.399 [2024-11-05 04:39:55.789299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.399 [2024-11-05 04:39:55.789314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:42.399 [2024-11-05 04:39:55.801711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e7c50 00:28:42.400 [2024-11-05 04:39:55.803347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:3696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.803362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:55.812066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ebb98 00:28:42.400 [2024-11-05 04:39:55.813052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7367 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.813067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:55.825455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ebb98 00:28:42.400 [2024-11-05 04:39:55.827084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.827102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:55.835834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166df118 00:28:42.400 [2024-11-05 04:39:55.836818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.836834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:55.847733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166df118 00:28:42.400 [2024-11-05 04:39:55.848699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.848714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:28:42.400 
[2024-11-05 04:39:55.859623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166eb328 00:28:42.400 [2024-11-05 04:39:55.860594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.860609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:55.870758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fdeb0 00:28:42.400 [2024-11-05 04:39:55.871711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.871725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:55.883417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fdeb0 00:28:42.400 [2024-11-05 04:39:55.884376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.884391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:55.895328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fdeb0 00:28:42.400 [2024-11-05 04:39:55.896295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.896311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:55.907214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fdeb0 00:28:42.400 [2024-11-05 04:39:55.908176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.908192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:55.919118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fdeb0 00:28:42.400 [2024-11-05 04:39:55.920075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.920091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:55.931000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fdeb0 00:28:42.400 [2024-11-05 04:39:55.931938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.931953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 
dnr:0 00:28:42.400 [2024-11-05 04:39:55.942891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166eaab8 00:28:42.400 [2024-11-05 04:39:55.943815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.943830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:55.954809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166fc998 00:28:42.400 [2024-11-05 04:39:55.955775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.955790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:55.966765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f2948 00:28:42.400 [2024-11-05 04:39:55.967723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.967739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:55.977899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f6458 00:28:42.400 [2024-11-05 04:39:55.978809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.978824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:55.990552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f6458 00:28:42.400 [2024-11-05 04:39:55.991503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:14211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:55.991519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:56.002435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f6458 00:28:42.400 [2024-11-05 04:39:56.003388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:10396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:56.003404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:56.015857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f6458 00:28:42.400 [2024-11-05 04:39:56.017437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:56.017452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 
sqhd:004b p:0 m:0 dnr:0 00:28:42.400 [2024-11-05 04:39:56.025450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166eaab8 00:28:42.400 [2024-11-05 04:39:56.026341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.400 [2024-11-05 04:39:56.026356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:28:42.661 [2024-11-05 04:39:56.038170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ebb98 00:28:42.661 [2024-11-05 04:39:56.039068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:25227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.661 [2024-11-05 04:39:56.039084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:42.661 [2024-11-05 04:39:56.050098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166f1430 00:28:42.661 [2024-11-05 04:39:56.051053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.661 [2024-11-05 04:39:56.051069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:28:42.661 [2024-11-05 04:39:56.061995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e1b48 00:28:42.661 [2024-11-05 04:39:56.062934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.661 [2024-11-05 04:39:56.062949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:28:42.661 [2024-11-05 04:39:56.073071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e2c28 00:28:42.661 [2024-11-05 04:39:56.073977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.661 [2024-11-05 04:39:56.073992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:28:42.661 [2024-11-05 04:39:56.085820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e2c28 00:28:42.661 [2024-11-05 04:39:56.086751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.661 [2024-11-05 04:39:56.086767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:42.661 [2024-11-05 04:39:56.097719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e2c28 00:28:42.661 [2024-11-05 04:39:56.098653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10011 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.661 [2024-11-05 04:39:56.098669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:42.661 [2024-11-05 04:39:56.109607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e2c28 00:28:42.661 [2024-11-05 04:39:56.110534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.661 [2024-11-05 04:39:56.110549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:42.661 [2024-11-05 04:39:56.121494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e2c28 00:28:42.661 [2024-11-05 04:39:56.122425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.661 [2024-11-05 04:39:56.122440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:42.661 [2024-11-05 04:39:56.133393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e2c28 00:28:42.661 [2024-11-05 04:39:56.134323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.661 [2024-11-05 04:39:56.134341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:42.661 [2024-11-05 04:39:56.145290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e2c28 00:28:42.661 [2024-11-05 04:39:56.146219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:24550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.661 [2024-11-05 04:39:56.146234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:42.661 [2024-11-05 04:39:56.157183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e2c28 00:28:42.661 [2024-11-05 04:39:56.158115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:15300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.661 [2024-11-05 04:39:56.158130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:42.661 [2024-11-05 04:39:56.170589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166e2c28 00:28:42.661 [2024-11-05 04:39:56.172162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:10634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.661 [2024-11-05 04:39:56.172178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:28:42.661 [2024-11-05 04:39:56.180935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166ea680 00:28:42.661 [2024-11-05 04:39:56.181860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:42.661 [2024-11-05 04:39:56.181876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
21336.00 IOPS, 83.34 MiB/s [2024-11-05T03:39:56.301Z]
[2024-11-05 04:39:56.192788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e520) with pdu=0x2000166de8a8
[2024-11-05 04:39:56.193699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-11-05 04:39:56.193714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0047 p:0 m:0 dnr:0

Latency(us)
[2024-11-05T03:39:56.301Z] Device Information : runtime(s)      IOPS    MiB/s   Fail/s    TO/s   Average       min       max
Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
nvme0n1            :       2.01   21356.52    83.42     0.00    0.00   5984.89   2266.45  13817.17
[2024-11-05T03:39:56.301Z] ===================================================================================================================
[2024-11-05T03:39:56.301Z] Total              :           21356.52    83.42     0.00    0.00   5984.89   2266.45  13817.17
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randwrite",
      "status": "finished",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 2.006273,
      "iops": 21356.515289793562,
      "mibps": 83.4238878507561,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 5984.894549132184,
      "min_latency_us": 2266.4533333333334,
      "max_latency_us": 13817.173333333334
    }
  ],
  "core_count": 1
}
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 168 > 0 ))
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3164824
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3164824 ']'
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3164824
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3164824
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3164824'
killing process with pid 3164824
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3164824
Received shutdown signal, test time was about 2.000000 seconds

Latency(us)
[2024-11-05T03:39:56.562Z] Device Information : runtime(s)      IOPS    MiB/s   Fail/s    TO/s   Average       min       max
[2024-11-05T03:39:56.562Z] ===================================================================================================================
[2024-11-05T03:39:56.562Z] Total              :               0.00     0.00     0.00    0.00      0.00      0.00      0.00
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3164824
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3165507
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3165507 /var/tmp/bperf.sock
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 3165507 ']'
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
04:39:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-11-05 04:39:56.623708] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization...
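While the new bdevperf instance starts up, the pass/fail gate that just ran is worth pinning down: get_transient_errcount returned 168 and the test asserted (( 168 > 0 )). A minimal sketch of that check, reusing the exact RPC call and jq filter from the trace above; the SPDK path, socket path, and bdev name are the ones in this log, and the counter is only populated because the controller was configured with --nvme-error-stat (visible in the setup trace below).

#!/usr/bin/env bash
# Sketch: count transient transport errors the way digest.sh's
# get_transient_errcount appears to, via bdev_get_iostat over the
# bdevperf RPC socket.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path from this log
BPERF_SOCK=/var/tmp/bperf.sock

errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

# Every injected CRC-32C digest failure should surface as a TRANSIENT
# TRANSPORT ERROR completion, so a zero count means the test failed.
(( errcount > 0 )) || { echo "no transient transport errors counted" >&2; exit 1; }
echo "transient transport errors: $errcount"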
[2024-11-05 04:39:56.623772] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3165507 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
[2024-11-05 04:39:56.708510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-11-05 04:39:56.738062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
nvme0n1
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
04:39:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
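Before the second run's error records begin, a condensed sketch of the setup sequence the xtrace above just walked through, using only the RPCs visible in the log. One assumption is labeled below: rpc_cmd in digest.sh is taken to address the target application's default RPC socket, while bperf_rpc addresses the bdevperf socket.

#!/usr/bin/env bash
# Sketch of the digest-error setup traced above. Assumption: rpc_cmd goes to
# the target app's default RPC socket; bperf_rpc goes to the bdevperf socket.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# Keep per-status-code NVMe error counters and retry failed I/O indefinitely,
# so injected digest errors are counted instead of failing the workload.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any stale CRC-32C error injection before arming a fresh one.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach the TCP controller with data digest (--ddgst) enabled; the log shows
# this returning the bdev name nvme0n1.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
  -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Arm crc32c corruption with the same knobs captured above (-t corrupt -i 32),
# then tell the idling bdevperf (started with -z) to run its timed workload.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests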
00:28:44.385 [2024-11-05 04:39:57.975788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.385 [2024-11-05 04:39:57.976125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.385 [2024-11-05 04:39:57.976153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.385 [2024-11-05 04:39:57.982818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.385 [2024-11-05 04:39:57.983150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.385 [2024-11-05 04:39:57.983176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.385 [2024-11-05 04:39:57.988550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.385 [2024-11-05 04:39:57.988932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.385 [2024-11-05 04:39:57.988951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.385 [2024-11-05 04:39:57.998285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.385 [2024-11-05 04:39:57.998612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.385 [2024-11-05 04:39:57.998629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.385 [2024-11-05 04:39:58.006126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.385 [2024-11-05 04:39:58.006443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.385 [2024-11-05 04:39:58.006461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.385 [2024-11-05 04:39:58.012456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.385 [2024-11-05 04:39:58.012659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.385 [2024-11-05 04:39:58.012676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.385 [2024-11-05 04:39:58.019332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.385 [2024-11-05 04:39:58.019538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.385 [2024-11-05 04:39:58.019555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.646 [2024-11-05 04:39:58.027109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.646 [2024-11-05 04:39:58.027433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.646 [2024-11-05 04:39:58.027450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.646 [2024-11-05 04:39:58.035067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.646 [2024-11-05 04:39:58.035269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.646 [2024-11-05 04:39:58.035286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.646 [2024-11-05 04:39:58.041180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.646 [2024-11-05 04:39:58.041383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.646 [2024-11-05 04:39:58.041399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.646 [2024-11-05 04:39:58.049113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.646 [2024-11-05 04:39:58.049316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.646 [2024-11-05 04:39:58.049333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.646 [2024-11-05 04:39:58.058181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.646 [2024-11-05 04:39:58.058448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.646 [2024-11-05 04:39:58.058464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.646 [2024-11-05 04:39:58.063791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.646 [2024-11-05 04:39:58.063994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.646 [2024-11-05 04:39:58.064011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.646 [2024-11-05 04:39:58.070652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.646 [2024-11-05 04:39:58.070993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.646 [2024-11-05 04:39:58.071011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.646 [2024-11-05 04:39:58.078158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.646 [2024-11-05 04:39:58.078483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.646 [2024-11-05 04:39:58.078500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.646 [2024-11-05 04:39:58.084954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.646 [2024-11-05 04:39:58.085277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.646 [2024-11-05 04:39:58.085294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.646 [2024-11-05 04:39:58.092763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.646 [2024-11-05 04:39:58.093100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.646 [2024-11-05 04:39:58.093117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.646 [2024-11-05 04:39:58.100178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.646 [2024-11-05 04:39:58.100379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.646 [2024-11-05 04:39:58.100395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.646 [2024-11-05 04:39:58.109192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.646 [2024-11-05 04:39:58.109515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.109532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.115985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.116316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.116336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.122152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.122489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.122506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.128237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.128439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.128455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.135864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.136180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.136198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.142960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.143161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.143178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.150058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.150383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.150400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.156316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.156631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.156648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.162488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.162815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.162833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.169968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.170281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 
[2024-11-05 04:39:58.170298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.177240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.177493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.177510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.184405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.184714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.184731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.190272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.190472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.190489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.196810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.197013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.197029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.203715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.203938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.203954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.209077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.209277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.209294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.216655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.216870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.216887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.225171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.225480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.225498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.231348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.231554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.231571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.240642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.240904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.240920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.249034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.249347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.249364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.257655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.258005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.258023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.266808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.266892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.266908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.275888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.276085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.276101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.647 [2024-11-05 04:39:58.282846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.647 [2024-11-05 04:39:58.283175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.647 [2024-11-05 04:39:58.283193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.289152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.289341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.289357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.297455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.297796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.297813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.306379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.306626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.306645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.314697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.315002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.315020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.323904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.324201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.324219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.334955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.335160] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.335177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.346147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.346341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.346357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.357167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.357384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.357400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.368510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.368732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.368753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.379330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.379563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.379579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.388516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.388722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.388739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.396567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.396771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.396788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.405867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.406160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.406177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.415238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.415454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.415470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.422836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.423161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.423179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.431132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.431504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.431522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.437653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.909 [2024-11-05 04:39:58.437848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.909 [2024-11-05 04:39:58.437864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.909 [2024-11-05 04:39:58.445118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.910 [2024-11-05 04:39:58.445401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.910 [2024-11-05 04:39:58.445418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.910 [2024-11-05 04:39:58.453683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.910 [2024-11-05 04:39:58.453877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.910 [2024-11-05 04:39:58.453894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.910 [2024-11-05 04:39:58.460692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.910 
[2024-11-05 04:39:58.460954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.910 [2024-11-05 04:39:58.460970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.910 [2024-11-05 04:39:58.468575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.910 [2024-11-05 04:39:58.468885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.910 [2024-11-05 04:39:58.468902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.910 [2024-11-05 04:39:58.475672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.910 [2024-11-05 04:39:58.475874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.910 [2024-11-05 04:39:58.475890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.910 [2024-11-05 04:39:58.484433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.910 [2024-11-05 04:39:58.484766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.910 [2024-11-05 04:39:58.484784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.910 [2024-11-05 04:39:58.495804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.910 [2024-11-05 04:39:58.496137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.910 [2024-11-05 04:39:58.496155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.910 [2024-11-05 04:39:58.505939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.910 [2024-11-05 04:39:58.506178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.910 [2024-11-05 04:39:58.506201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.910 [2024-11-05 04:39:58.516087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.910 [2024-11-05 04:39:58.516526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.910 [2024-11-05 04:39:58.516544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.910 [2024-11-05 04:39:58.522661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) 
with pdu=0x2000166fef90 00:28:44.910 [2024-11-05 04:39:58.522854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.910 [2024-11-05 04:39:58.522871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:44.910 [2024-11-05 04:39:58.528973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.910 [2024-11-05 04:39:58.529153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.910 [2024-11-05 04:39:58.529170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:44.910 [2024-11-05 04:39:58.536051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.910 [2024-11-05 04:39:58.536235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.910 [2024-11-05 04:39:58.536254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:44.910 [2024-11-05 04:39:58.539805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.910 [2024-11-05 04:39:58.539975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.910 [2024-11-05 04:39:58.539991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:44.910 [2024-11-05 04:39:58.543913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:44.910 [2024-11-05 04:39:58.544086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:44.910 [2024-11-05 04:39:58.544102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.171 [2024-11-05 04:39:58.547858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:45.171 [2024-11-05 04:39:58.548029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.171 [2024-11-05 04:39:58.548045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.171 [2024-11-05 04:39:58.551936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:45.171 [2024-11-05 04:39:58.552107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.171 [2024-11-05 04:39:58.552124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.171 [2024-11-05 04:39:58.555867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:45.171 [2024-11-05 04:39:58.556040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.171 [2024-11-05 04:39:58.556056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.171 [2024-11-05 04:39:58.559812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:45.171 [2024-11-05 04:39:58.559990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.171 [2024-11-05 04:39:58.560006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.172 [2024-11-05 04:39:58.563823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:45.172 [2024-11-05 04:39:58.563998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.172 [2024-11-05 04:39:58.564015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.172 [2024-11-05 04:39:58.567509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:45.172 [2024-11-05 04:39:58.567679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.172 [2024-11-05 04:39:58.567696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:45.172 [2024-11-05 04:39:58.571665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:45.172 [2024-11-05 04:39:58.571846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.172 [2024-11-05 04:39:58.571863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:45.172 [2024-11-05 04:39:58.575682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:45.172 [2024-11-05 04:39:58.575863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.172 [2024-11-05 04:39:58.575880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:45.172 [2024-11-05 04:39:58.581711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:45.172 [2024-11-05 04:39:58.581897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.172 [2024-11-05 04:39:58.581914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:45.172 [2024-11-05 04:39:58.586099] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:45.172 [2024-11-05 04:39:58.586273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:45.172 [2024-11-05 04:39:58.586289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... same three-record pattern repeated continuously from 04:39:58.592 through 04:39:59.449 (console time 00:28:45.172 - 00:28:46.026): each iteration logs tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90, then nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 (lba varies per record), then nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 p:0 m:0 dnr:0 with sqhd cycling 0001/0021/0041/0061; one throughput sample interleaved: 4105.00 IOPS, 513.12 MiB/s [2024-11-05T03:39:59.081Z] ...]
00:28:46.026 [2024-11-05 04:39:59.459216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.026
[2024-11-05 04:39:59.459445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.026 [2024-11-05 04:39:59.459461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.026 [2024-11-05 04:39:59.470662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.026 [2024-11-05 04:39:59.470910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.026 [2024-11-05 04:39:59.470926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.026 [2024-11-05 04:39:59.481914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.026 [2024-11-05 04:39:59.482238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.026 [2024-11-05 04:39:59.482254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.026 [2024-11-05 04:39:59.493044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.026 [2024-11-05 04:39:59.493355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.026 [2024-11-05 04:39:59.493372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.026 [2024-11-05 04:39:59.503863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.026 [2024-11-05 04:39:59.504170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.026 [2024-11-05 04:39:59.504185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.026 [2024-11-05 04:39:59.515103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.026 [2024-11-05 04:39:59.515404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.026 [2024-11-05 04:39:59.515420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.026 [2024-11-05 04:39:59.526151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.026 [2024-11-05 04:39:59.526443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.026 [2024-11-05 04:39:59.526459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.026 [2024-11-05 04:39:59.536898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) 
with pdu=0x2000166fef90 00:28:46.026 [2024-11-05 04:39:59.536966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.026 [2024-11-05 04:39:59.536984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.026 [2024-11-05 04:39:59.547642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.026 [2024-11-05 04:39:59.547815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.026 [2024-11-05 04:39:59.547831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.026 [2024-11-05 04:39:59.558154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.026 [2024-11-05 04:39:59.558464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.026 [2024-11-05 04:39:59.558482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.026 [2024-11-05 04:39:59.568523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.026 [2024-11-05 04:39:59.568823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.026 [2024-11-05 04:39:59.568840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.026 [2024-11-05 04:39:59.578889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.026 [2024-11-05 04:39:59.579150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.026 [2024-11-05 04:39:59.579166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.026 [2024-11-05 04:39:59.589248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.026 [2024-11-05 04:39:59.589489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.026 [2024-11-05 04:39:59.589505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.026 [2024-11-05 04:39:59.600115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.026 [2024-11-05 04:39:59.600251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.026 [2024-11-05 04:39:59.600266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.026 [2024-11-05 04:39:59.610664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.026 [2024-11-05 04:39:59.611091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.026 [2024-11-05 04:39:59.611108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.026 [2024-11-05 04:39:59.621846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.026 [2024-11-05 04:39:59.621907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.027 [2024-11-05 04:39:59.621923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.027 [2024-11-05 04:39:59.632176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.027 [2024-11-05 04:39:59.632487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.027 [2024-11-05 04:39:59.632503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.027 [2024-11-05 04:39:59.639070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.027 [2024-11-05 04:39:59.639245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.027 [2024-11-05 04:39:59.639261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.027 [2024-11-05 04:39:59.646534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.027 [2024-11-05 04:39:59.646630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.027 [2024-11-05 04:39:59.646646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.027 [2024-11-05 04:39:59.652244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.027 [2024-11-05 04:39:59.652307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.027 [2024-11-05 04:39:59.652323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.027 [2024-11-05 04:39:59.658109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.027 [2024-11-05 04:39:59.658162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.027 [2024-11-05 04:39:59.658177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.289 [2024-11-05 04:39:59.663657] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.663736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.289 [2024-11-05 04:39:59.663759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.289 [2024-11-05 04:39:59.668467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.668532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.289 [2024-11-05 04:39:59.668547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.289 [2024-11-05 04:39:59.674900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.674961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.289 [2024-11-05 04:39:59.674977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.289 [2024-11-05 04:39:59.681792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.681874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.289 [2024-11-05 04:39:59.681890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.289 [2024-11-05 04:39:59.688500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.688576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.289 [2024-11-05 04:39:59.688591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.289 [2024-11-05 04:39:59.695509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.695570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.289 [2024-11-05 04:39:59.695585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.289 [2024-11-05 04:39:59.703863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.703939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.289 [2024-11-05 04:39:59.703954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
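Each triplet in this run records one corrupted write: data_crc32_calc_done() in tcp.c rejects the received data digest, then the host prints the affected WRITE command and its completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22). The run of these records continues below; every such completion increments the bdev's transient-transport-error counter, which host/digest.sh reads back over the bperf RPC socket once I/O stops. A minimal sketch of that readback, assuming a standalone script and a running bperf instance; the rpc.py path, /var/tmp/bperf.sock socket, nvme0n1 bdev name, and jq filter are the ones this run uses:

#!/usr/bin/env bash
# Sketch of the error-count readback host/digest.sh performs after the
# I/O phase. Socket path, bdev name, and jq filter match this run.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
SOCK=/var/tmp/bperf.sock   # bperf RPC socket used in this run
BDEV=nvme0n1               # bdev name used in this run

# bdev_get_iostat exposes per-bdev NVMe error counters; each digest
# failure above surfaces as one command_transient_transport_error.
errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b "$BDEV" \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# Mirrors the host/digest.sh assertion: at least one error must have
# been observed (this run reports 282).
if (( errcount > 0 )); then
  echo "observed $errcount transient transport errors"
fi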
00:28:46.289 [2024-11-05 04:39:59.709593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.709662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.289 [2024-11-05 04:39:59.709677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.289 [2024-11-05 04:39:59.715933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.716243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.289 [2024-11-05 04:39:59.716259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.289 [2024-11-05 04:39:59.722495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.722550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.289 [2024-11-05 04:39:59.722565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.289 [2024-11-05 04:39:59.729840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.729943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.289 [2024-11-05 04:39:59.729959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.289 [2024-11-05 04:39:59.737029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.737316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.289 [2024-11-05 04:39:59.737333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.289 [2024-11-05 04:39:59.747219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.747531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.289 [2024-11-05 04:39:59.747551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.289 [2024-11-05 04:39:59.757721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.757919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.289 [2024-11-05 04:39:59.757935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.289 [2024-11-05 04:39:59.768438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.768649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.289 [2024-11-05 04:39:59.768664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.289 [2024-11-05 04:39:59.779322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.289 [2024-11-05 04:39:59.779602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.290 [2024-11-05 04:39:59.779618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.290 [2024-11-05 04:39:59.790276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.290 [2024-11-05 04:39:59.790344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.290 [2024-11-05 04:39:59.790360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.290 [2024-11-05 04:39:59.800417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.290 [2024-11-05 04:39:59.800636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.290 [2024-11-05 04:39:59.800652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.290 [2024-11-05 04:39:59.810818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.290 [2024-11-05 04:39:59.810963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.290 [2024-11-05 04:39:59.810979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.290 [2024-11-05 04:39:59.821455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.290 [2024-11-05 04:39:59.821690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.290 [2024-11-05 04:39:59.821706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.290 [2024-11-05 04:39:59.832914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.290 [2024-11-05 04:39:59.833233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.290 [2024-11-05 04:39:59.833249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.290 [2024-11-05 04:39:59.843864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.290 [2024-11-05 04:39:59.844154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.290 [2024-11-05 04:39:59.844170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.290 [2024-11-05 04:39:59.854355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.290 [2024-11-05 04:39:59.854572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.290 [2024-11-05 04:39:59.854588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.290 [2024-11-05 04:39:59.865256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.290 [2024-11-05 04:39:59.865561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.290 [2024-11-05 04:39:59.865577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.290 [2024-11-05 04:39:59.876289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.290 [2024-11-05 04:39:59.876537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.290 [2024-11-05 04:39:59.876553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.290 [2024-11-05 04:39:59.887181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.290 [2024-11-05 04:39:59.887462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.290 [2024-11-05 04:39:59.887477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.290 [2024-11-05 04:39:59.897499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.290 [2024-11-05 04:39:59.897791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.290 [2024-11-05 04:39:59.897806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.290 [2024-11-05 04:39:59.908634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.290 [2024-11-05 04:39:59.908948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.290 [2024-11-05 04:39:59.908964] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.290 [2024-11-05 04:39:59.919622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.290 [2024-11-05 04:39:59.919938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.290 [2024-11-05 04:39:59.919955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.551 [2024-11-05 04:39:59.929760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.551 [2024-11-05 04:39:59.930055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.551 [2024-11-05 04:39:59.930072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:46.551 [2024-11-05 04:39:59.940617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.551 [2024-11-05 04:39:59.940688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.551 [2024-11-05 04:39:59.940704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:46.551 [2024-11-05 04:39:59.951385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.551 [2024-11-05 04:39:59.951697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.551 [2024-11-05 04:39:59.951714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:46.551 [2024-11-05 04:39:59.961429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xf8e860) with pdu=0x2000166fef90 00:28:46.551 [2024-11-05 04:39:59.961682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:46.551 [2024-11-05 04:39:59.961698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:46.551 4376.00 IOPS, 547.00 MiB/s 00:28:46.551 Latency(us) 00:28:46.551 [2024-11-05T03:40:00.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.551 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:46.551 nvme0n1 : 2.01 4371.42 546.43 0.00 0.00 3653.00 1658.88 12069.55 00:28:46.551 [2024-11-05T03:40:00.191Z] =================================================================================================================== 00:28:46.551 [2024-11-05T03:40:00.191Z] Total : 4371.42 546.43 0.00 0.00 3653.00 1658.88 12069.55 00:28:46.551 { 00:28:46.551 "results": [ 00:28:46.551 { 00:28:46.551 "job": "nvme0n1", 00:28:46.551 "core_mask": "0x2", 00:28:46.551 "workload": "randwrite", 00:28:46.551 "status": "finished", 00:28:46.551 "queue_depth": 16, 00:28:46.551 "io_size": 131072, 
00:28:46.551 "runtime": 2.006442, 00:28:46.551 "iops": 4371.419657283888, 00:28:46.551 "mibps": 546.427457160486, 00:28:46.551 "io_failed": 0, 00:28:46.551 "io_timeout": 0, 00:28:46.551 "avg_latency_us": 3653.0049055599893, 00:28:46.551 "min_latency_us": 1658.88, 00:28:46.551 "max_latency_us": 12069.546666666667 00:28:46.551 } 00:28:46.551 ], 00:28:46.551 "core_count": 1 00:28:46.551 } 00:28:46.551 04:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:46.551 04:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:46.551 04:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:46.551 | .driver_specific 00:28:46.551 | .nvme_error 00:28:46.551 | .status_code 00:28:46.551 | .command_transient_transport_error' 00:28:46.551 04:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:46.551 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 282 > 0 )) 00:28:46.551 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3165507 00:28:46.551 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3165507 ']' 00:28:46.551 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3165507 00:28:46.551 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:46.551 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:46.551 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3165507 00:28:46.812 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:46.812 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:28:46.812 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3165507' 00:28:46.812 killing process with pid 3165507 00:28:46.812 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3165507 00:28:46.812 Received shutdown signal, test time was about 2.000000 seconds 00:28:46.812 00:28:46.812 Latency(us) 00:28:46.812 [2024-11-05T03:40:00.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.812 [2024-11-05T03:40:00.452Z] =================================================================================================================== 00:28:46.812 [2024-11-05T03:40:00.452Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:46.812 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3165507 00:28:46.812 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3163105 00:28:46.812 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 3163105 ']' 00:28:46.812 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 3163105 00:28:46.812 04:40:00 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:28:46.812 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:46.812 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3163105 00:28:46.812 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:46.812 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:46.812 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3163105' 00:28:46.812 killing process with pid 3163105 00:28:46.812 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 3163105 00:28:46.812 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 3163105 00:28:47.073 00:28:47.073 real 0m16.361s 00:28:47.073 user 0m32.499s 00:28:47.073 sys 0m3.490s 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:47.073 ************************************ 00:28:47.073 END TEST nvmf_digest_error 00:28:47.073 ************************************ 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:47.073 rmmod nvme_tcp 00:28:47.073 rmmod nvme_fabrics 00:28:47.073 rmmod nvme_keyring 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 3163105 ']' 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 3163105 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 3163105 ']' 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 3163105 00:28:47.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3163105) - No such process 00:28:47.073 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 3163105 is not found' 00:28:47.073 Process with pid 3163105 is not found 00:28:47.074 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:47.074 04:40:00 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:47.074 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:47.074 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:28:47.074 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:28:47.074 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:47.074 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:28:47.074 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:47.074 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:47.074 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:47.074 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:47.074 04:40:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:49.621 00:28:49.621 real 0m43.079s 00:28:49.621 user 1m7.995s 00:28:49.621 sys 0m12.673s 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:49.621 ************************************ 00:28:49.621 END TEST nvmf_digest 00:28:49.621 ************************************ 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.621 ************************************ 00:28:49.621 START TEST nvmf_bdevperf 00:28:49.621 ************************************ 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:49.621 * Looking for test storage... 
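The bdevperf job starts by locating test storage and probing the installed lcov: the trace just below walks scripts/common.sh through lt 1.15 2, splitting both version strings on the dot separators and comparing them segment by segment to decide which coverage flags apply. A condensed re-implementation of that comparison for plain numeric dotted versions, with zero-padding of the shorter version assumed (the authoritative helper is cmp_versions in scripts/common.sh):

#!/usr/bin/env bash
# Condensed sketch of the version comparison traced below; handles
# numeric dotted versions only, padding the shorter one with zeros.
lt() {  # usage: lt 1.15 2  -> success (0) if $1 < $2
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1  # equal versions are not less-than
}

# As in this run: lcov 1.15 is older than 2, so the 1.x --rc flag
# spelling is selected.
lt 1.15 2 && echo "lcov < 2: use the lcov 1.x --rc flag set"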
00:28:49.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:49.621 04:40:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:49.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.621 --rc genhtml_branch_coverage=1 00:28:49.621 --rc genhtml_function_coverage=1 00:28:49.621 --rc genhtml_legend=1 00:28:49.621 --rc geninfo_all_blocks=1 00:28:49.621 --rc geninfo_unexecuted_blocks=1 00:28:49.621 00:28:49.621 ' 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:49.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.621 --rc genhtml_branch_coverage=1 00:28:49.621 --rc genhtml_function_coverage=1 00:28:49.621 --rc genhtml_legend=1 00:28:49.621 --rc geninfo_all_blocks=1 00:28:49.621 --rc geninfo_unexecuted_blocks=1 00:28:49.621 00:28:49.621 ' 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:49.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.621 --rc genhtml_branch_coverage=1 00:28:49.621 --rc genhtml_function_coverage=1 00:28:49.621 --rc genhtml_legend=1 00:28:49.621 --rc geninfo_all_blocks=1 00:28:49.621 --rc geninfo_unexecuted_blocks=1 00:28:49.621 00:28:49.621 ' 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:49.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.621 --rc genhtml_branch_coverage=1 00:28:49.621 --rc genhtml_function_coverage=1 00:28:49.621 --rc genhtml_legend=1 00:28:49.621 --rc geninfo_all_blocks=1 00:28:49.621 --rc geninfo_unexecuted_blocks=1 00:28:49.621 00:28:49.621 ' 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.621 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:49.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:28:49.622 04:40:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:57.767 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:57.767 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
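For reference, the discovery the trace above just performed (match Intel E810 functions by PCI vendor/device ID, then glob their netdevs out of sysfs, as the per-device loop below goes on to do) can be reproduced standalone with a few lines of shell. This is a minimal sketch, assuming the same 0x8086/0x159b pair this host matched; the sysfs glob is the same /sys/bus/pci/devices/$pci/net/* pattern nvmf/common.sh itself uses:

for pci in /sys/bus/pci/devices/*; do
    # vendor/device sysfs files hold the PCI IDs, e.g. 0x8086 (Intel) / 0x159b (E810)
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue   # function has no bound netdev (e.g. driver unbound)
        echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done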
00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:57.767 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:57.767 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:57.767 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:57.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:57.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:28:57.768 00:28:57.768 --- 10.0.0.2 ping statistics --- 00:28:57.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.768 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:57.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:57.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:28:57.768 00:28:57.768 --- 10.0.0.1 ping statistics --- 00:28:57.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:57.768 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3170528 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3170528 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3170528 ']' 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:57.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:57.768 04:40:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.768 [2024-11-05 04:40:10.532190] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:28:57.768 [2024-11-05 04:40:10.532294] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:57.768 [2024-11-05 04:40:10.635721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:57.768 [2024-11-05 04:40:10.687685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:57.768 [2024-11-05 04:40:10.687740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:57.768 [2024-11-05 04:40:10.687759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:57.768 [2024-11-05 04:40:10.687766] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:57.768 [2024-11-05 04:40:10.687772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:57.768 [2024-11-05 04:40:10.689576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:57.768 [2024-11-05 04:40:10.689625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.768 [2024-11-05 04:40:10.689626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.768 [2024-11-05 04:40:11.369187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:57.768 Malloc0 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:57.768 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
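The rpc_cmd calls above and below are thin wrappers around SPDK's scripts/rpc.py talking to the target's UNIX socket at /var/tmp/spdk.sock. A minimal sketch of the same target bring-up as plain rpc.py invocations, with arguments copied from the trace (the add_ns and add_listener steps appear in the trace just below); the $RPC shorthand is illustrative:

RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -u 8192          # transport options as traced above
$RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420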
00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:58.029 [2024-11-05 04:40:11.433719] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:58.029 { 00:28:58.029 "params": { 00:28:58.029 "name": "Nvme$subsystem", 00:28:58.029 "trtype": "$TEST_TRANSPORT", 00:28:58.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:58.029 "adrfam": "ipv4", 00:28:58.029 "trsvcid": "$NVMF_PORT", 00:28:58.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:58.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:58.029 "hdgst": ${hdgst:-false}, 00:28:58.029 "ddgst": ${ddgst:-false} 00:28:58.029 }, 00:28:58.029 "method": "bdev_nvme_attach_controller" 00:28:58.029 } 00:28:58.029 EOF 00:28:58.029 )") 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:58.029 04:40:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:58.029 "params": { 00:28:58.029 "name": "Nvme1", 00:28:58.029 "trtype": "tcp", 00:28:58.029 "traddr": "10.0.0.2", 00:28:58.029 "adrfam": "ipv4", 00:28:58.029 "trsvcid": "4420", 00:28:58.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:58.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:58.029 "hdgst": false, 00:28:58.029 "ddgst": false 00:28:58.029 }, 00:28:58.029 "method": "bdev_nvme_attach_controller" 00:28:58.029 }' 00:28:58.029 [2024-11-05 04:40:11.488778] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:28:58.029 [2024-11-05 04:40:11.488825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3170692 ] 00:28:58.029 [2024-11-05 04:40:11.558008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.029 [2024-11-05 04:40:11.594344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.290 Running I/O for 1 seconds... 00:28:59.675 8849.00 IOPS, 34.57 MiB/s 00:28:59.675 Latency(us) 00:28:59.675 [2024-11-05T03:40:13.315Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.675 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:59.675 Verification LBA range: start 0x0 length 0x4000 00:28:59.675 Nvme1n1 : 1.01 8934.86 34.90 0.00 0.00 14233.40 2375.68 15400.96 00:28:59.675 [2024-11-05T03:40:13.315Z] =================================================================================================================== 00:28:59.675 [2024-11-05T03:40:13.315Z] Total : 8934.86 34.90 0.00 0.00 14233.40 2375.68 15400.96 00:28:59.675 04:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3170916 00:28:59.675 04:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:59.675 04:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:59.675 04:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:59.675 04:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:28:59.675 04:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:28:59.675 04:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.675 04:40:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.675 { 00:28:59.675 "params": { 00:28:59.675 "name": "Nvme$subsystem", 00:28:59.675 "trtype": "$TEST_TRANSPORT", 00:28:59.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.675 "adrfam": "ipv4", 00:28:59.675 "trsvcid": "$NVMF_PORT", 00:28:59.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.675 "hdgst": ${hdgst:-false}, 00:28:59.675 "ddgst": ${ddgst:-false} 00:28:59.675 }, 00:28:59.675 "method": "bdev_nvme_attach_controller" 00:28:59.675 } 00:28:59.675 EOF 00:28:59.675 )") 00:28:59.675 04:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:28:59.675 04:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
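Here gen_nvmf_target_json (traced above, with the assembled config printed just below) builds a one-controller bdev configuration and hands it to bdevperf over a file descriptor. Written to a file instead, an equivalent standalone run would look like the sketch below; the /tmp path is illustrative, the parameter values are the ones printf emits in the trace, and the outer "subsystems"/"config" wrapper is SPDK's standard JSON config shape:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 -f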
00:28:59.675 04:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:28:59.675 04:40:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:59.675 "params": { 00:28:59.675 "name": "Nvme1", 00:28:59.675 "trtype": "tcp", 00:28:59.675 "traddr": "10.0.0.2", 00:28:59.675 "adrfam": "ipv4", 00:28:59.675 "trsvcid": "4420", 00:28:59.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:59.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:59.675 "hdgst": false, 00:28:59.675 "ddgst": false 00:28:59.675 }, 00:28:59.675 "method": "bdev_nvme_attach_controller" 00:28:59.675 }' 00:28:59.675 [2024-11-05 04:40:13.046118] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:28:59.675 [2024-11-05 04:40:13.046177] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3170916 ] 00:28:59.675 [2024-11-05 04:40:13.116621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.675 [2024-11-05 04:40:13.152278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.936 Running I/O for 15 seconds... 00:29:02.261 11080.00 IOPS, 43.28 MiB/s [2024-11-05T03:40:16.164Z] 11098.50 IOPS, 43.35 MiB/s [2024-11-05T03:40:16.164Z] 04:40:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3170528 00:29:02.524 04:40:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:02.524 [2024-11-05 04:40:16.009403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.524 [2024-11-05 04:40:16.009445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.524 [2024-11-05 04:40:16.009467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.524 [2024-11-05 04:40:16.009477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.524 [2024-11-05 04:40:16.009487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.524 [2024-11-05 04:40:16.009497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.524 [2024-11-05 04:40:16.009510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.524 [2024-11-05 04:40:16.009518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.524 [2024-11-05 04:40:16.009530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.524 [2024-11-05 04:40:16.009537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.524 [2024-11-05 04:40:16.009547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.524 [2024-11-05 
04:40:16.009554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... the identical nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every remaining in-flight command after the kill -9: READ lba 93800 through 94264, then WRITE lba 94272 through 94512, each aborted with SQ DELETION (00/08) ...] 
00:29:02.527 [2024-11-05 04:40:16.011214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:94520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011222] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:94536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:94544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:94560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:94568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:94592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:94608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:94624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:94672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:94744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:94752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 04:40:16.011718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.527 [2024-11-05 04:40:16.011725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.527 [2024-11-05 
04:40:16.011734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020220 is same with the state(6) to be set 00:29:02.527 [2024-11-05 04:40:16.011744] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:02.528 [2024-11-05 04:40:16.011753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:02.528 [2024-11-05 04:40:16.011760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94768 len:8 PRP1 0x0 PRP2 0x0 00:29:02.528 [2024-11-05 04:40:16.011768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.528 [2024-11-05 04:40:16.015339] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.528 [2024-11-05 04:40:16.015390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.528 [2024-11-05 04:40:16.016169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.528 [2024-11-05 04:40:16.016186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.528 [2024-11-05 04:40:16.016195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.528 [2024-11-05 04:40:16.016416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.528 [2024-11-05 04:40:16.016636] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.528 [2024-11-05 04:40:16.016645] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.528 [2024-11-05 04:40:16.016654] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.528 [2024-11-05 04:40:16.020199] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.528 [2024-11-05 04:40:16.029393] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.528 [2024-11-05 04:40:16.030070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.528 [2024-11-05 04:40:16.030109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.528 [2024-11-05 04:40:16.030121] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.528 [2024-11-05 04:40:16.030361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.528 [2024-11-05 04:40:16.030583] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.528 [2024-11-05 04:40:16.030592] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.528 [2024-11-05 04:40:16.030600] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
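Every queued WRITE in the dump above is completed with the same status that spdk_nvme_print_completion renders as "ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0": status code type 0h (generic) and status code 08h (command aborted due to SQ deletion), with the phase, more, and do-not-retry bits all clear. A minimal sketch of that decoding, assuming the NVMe completion-queue-entry status layout from the base spec (the field names here are illustrative, not SPDK's own API):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* NVMe CQE Dword 3, upper half plus phase: P | SC | SCT | CRD | M | DNR. */
        uint16_t status = (0x0u << 9) | (0x08u << 1); /* SCT=00h, SC=08h, P=0 */

        unsigned p   = status & 1;           /* phase tag        -> p:0   */
        unsigned sc  = (status >> 1) & 0xff; /* status code      -> 08h   */
        unsigned sct = (status >> 9) & 0x7;  /* status code type -> 00h   */
        unsigned m   = (status >> 14) & 1;   /* more             -> m:0   */
        unsigned dnr = (status >> 15) & 1;   /* do not retry     -> dnr:0 */

        /* Prints "(00/08) p:0 m:0 dnr:0", matching the log lines above. */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        return 0;
    }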
00:29:02.528 [2024-11-05 04:40:16.034156] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.528 [2024-11-05 04:40:16.043341] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.528 [2024-11-05 04:40:16.043925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.528 [2024-11-05 04:40:16.043964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.528 [2024-11-05 04:40:16.043976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.528 [2024-11-05 04:40:16.044217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.528 [2024-11-05 04:40:16.044439] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.528 [2024-11-05 04:40:16.044453] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.528 [2024-11-05 04:40:16.044461] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.528 [2024-11-05 04:40:16.048014] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.528 [2024-11-05 04:40:16.057206] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.528 [2024-11-05 04:40:16.057802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.528 [2024-11-05 04:40:16.057827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.528 [2024-11-05 04:40:16.057836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.528 [2024-11-05 04:40:16.058060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.528 [2024-11-05 04:40:16.058280] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.528 [2024-11-05 04:40:16.058289] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.528 [2024-11-05 04:40:16.058296] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.528 [2024-11-05 04:40:16.061846] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
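Each retry cycle from here on repeats the same four steps: nvme_ctrlr_disconnect, a connect() that fails with errno 111 (ECONNREFUSED, nothing listening on 10.0.0.2 port 4420 while the target side is down), a flush of the already-closed qpair socket that fails with errno 9 (EBADF), and a failed controller reinitialization. A hedged sketch of the socket-level step only, written in plain POSIX rather than SPDK's posix_sock_create():

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port   = htons(4420),  /* NVMe-oF TCP default port */
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        /* With the host reachable but the port closed, connect() fails
         * with ECONNREFUSED (111), the errno reported in the log. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }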
00:29:02.528 [2024-11-05 04:40:16.071020] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.528 [2024-11-05 04:40:16.071678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.528 [2024-11-05 04:40:16.071717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.528 [2024-11-05 04:40:16.071728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.528 [2024-11-05 04:40:16.071974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.528 [2024-11-05 04:40:16.072198] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.528 [2024-11-05 04:40:16.072206] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.528 [2024-11-05 04:40:16.072214] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.528 [2024-11-05 04:40:16.075768] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.528 [2024-11-05 04:40:16.084956] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.528 [2024-11-05 04:40:16.085583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.528 [2024-11-05 04:40:16.085621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.528 [2024-11-05 04:40:16.085631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.528 [2024-11-05 04:40:16.085879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.528 [2024-11-05 04:40:16.086102] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.528 [2024-11-05 04:40:16.086115] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.528 [2024-11-05 04:40:16.086123] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.528 [2024-11-05 04:40:16.089669] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.528 [2024-11-05 04:40:16.098860] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.528 [2024-11-05 04:40:16.099526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.528 [2024-11-05 04:40:16.099563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.528 [2024-11-05 04:40:16.099574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.528 [2024-11-05 04:40:16.099820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.528 [2024-11-05 04:40:16.100044] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.528 [2024-11-05 04:40:16.100052] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.528 [2024-11-05 04:40:16.100060] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.528 [2024-11-05 04:40:16.103603] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.528 [2024-11-05 04:40:16.112786] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.528 [2024-11-05 04:40:16.113455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.528 [2024-11-05 04:40:16.113493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.528 [2024-11-05 04:40:16.113504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.528 [2024-11-05 04:40:16.113742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.528 [2024-11-05 04:40:16.113973] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.528 [2024-11-05 04:40:16.113983] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.528 [2024-11-05 04:40:16.113990] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.528 [2024-11-05 04:40:16.117620] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.528 [2024-11-05 04:40:16.126606] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.528 [2024-11-05 04:40:16.127116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.528 [2024-11-05 04:40:16.127154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.528 [2024-11-05 04:40:16.127166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.528 [2024-11-05 04:40:16.127410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.528 [2024-11-05 04:40:16.127644] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.528 [2024-11-05 04:40:16.127654] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.528 [2024-11-05 04:40:16.127661] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.528 [2024-11-05 04:40:16.131215] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.528 [2024-11-05 04:40:16.140404] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.528 [2024-11-05 04:40:16.141039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.528 [2024-11-05 04:40:16.141077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.528 [2024-11-05 04:40:16.141088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.529 [2024-11-05 04:40:16.141326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.529 [2024-11-05 04:40:16.141549] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.529 [2024-11-05 04:40:16.141557] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.529 [2024-11-05 04:40:16.141565] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.529 [2024-11-05 04:40:16.145119] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.529 [2024-11-05 04:40:16.154315] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.529 [2024-11-05 04:40:16.154894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.529 [2024-11-05 04:40:16.154932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.529 [2024-11-05 04:40:16.154944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.529 [2024-11-05 04:40:16.155185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.529 [2024-11-05 04:40:16.155407] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.529 [2024-11-05 04:40:16.155416] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.529 [2024-11-05 04:40:16.155424] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.529 [2024-11-05 04:40:16.158977] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.790 [2024-11-05 04:40:16.168164] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.790 [2024-11-05 04:40:16.168821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.790 [2024-11-05 04:40:16.168859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.790 [2024-11-05 04:40:16.168871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.790 [2024-11-05 04:40:16.169111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.790 [2024-11-05 04:40:16.169334] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.790 [2024-11-05 04:40:16.169348] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.790 [2024-11-05 04:40:16.169356] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.790 [2024-11-05 04:40:16.172913] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.790 [2024-11-05 04:40:16.182101] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.790 [2024-11-05 04:40:16.182675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.790 [2024-11-05 04:40:16.182693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.790 [2024-11-05 04:40:16.182701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.790 [2024-11-05 04:40:16.182926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.790 [2024-11-05 04:40:16.183145] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.790 [2024-11-05 04:40:16.183154] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.790 [2024-11-05 04:40:16.183161] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.790 [2024-11-05 04:40:16.186702] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.791 [2024-11-05 04:40:16.195891] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.791 [2024-11-05 04:40:16.196543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.791 [2024-11-05 04:40:16.196581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.791 [2024-11-05 04:40:16.196591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.791 [2024-11-05 04:40:16.196837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.791 [2024-11-05 04:40:16.197061] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.791 [2024-11-05 04:40:16.197069] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.791 [2024-11-05 04:40:16.197077] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.791 [2024-11-05 04:40:16.200619] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.791 [2024-11-05 04:40:16.209812] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.791 [2024-11-05 04:40:16.210441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.791 [2024-11-05 04:40:16.210479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.791 [2024-11-05 04:40:16.210490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.791 [2024-11-05 04:40:16.210728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.791 [2024-11-05 04:40:16.210957] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.791 [2024-11-05 04:40:16.210967] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.791 [2024-11-05 04:40:16.210974] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.791 [2024-11-05 04:40:16.214523] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.791 [2024-11-05 04:40:16.223713] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.791 [2024-11-05 04:40:16.224365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.791 [2024-11-05 04:40:16.224403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.791 [2024-11-05 04:40:16.224413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.791 [2024-11-05 04:40:16.224651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.791 [2024-11-05 04:40:16.224883] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.791 [2024-11-05 04:40:16.224893] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.791 [2024-11-05 04:40:16.224901] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.791 [2024-11-05 04:40:16.228455] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.791 [2024-11-05 04:40:16.237644] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.791 [2024-11-05 04:40:16.238366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.791 [2024-11-05 04:40:16.238403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.791 [2024-11-05 04:40:16.238416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.791 [2024-11-05 04:40:16.238654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.791 [2024-11-05 04:40:16.238885] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.791 [2024-11-05 04:40:16.238895] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.791 [2024-11-05 04:40:16.238903] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.791 [2024-11-05 04:40:16.242447] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.791 [2024-11-05 04:40:16.251457] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.791 [2024-11-05 04:40:16.252115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.791 [2024-11-05 04:40:16.252153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.791 [2024-11-05 04:40:16.252164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.791 [2024-11-05 04:40:16.252402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.791 [2024-11-05 04:40:16.252624] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.791 [2024-11-05 04:40:16.252633] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.791 [2024-11-05 04:40:16.252641] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.791 [2024-11-05 04:40:16.256201] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.791 [2024-11-05 04:40:16.265383] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.791 [2024-11-05 04:40:16.266065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.791 [2024-11-05 04:40:16.266103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.791 [2024-11-05 04:40:16.266115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.791 [2024-11-05 04:40:16.266354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.791 [2024-11-05 04:40:16.266577] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.791 [2024-11-05 04:40:16.266587] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.791 [2024-11-05 04:40:16.266594] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.791 [2024-11-05 04:40:16.270149] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.791 [2024-11-05 04:40:16.279336] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.791 [2024-11-05 04:40:16.279861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.791 [2024-11-05 04:40:16.279899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.791 [2024-11-05 04:40:16.279911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.791 [2024-11-05 04:40:16.280152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.791 [2024-11-05 04:40:16.280374] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.791 [2024-11-05 04:40:16.280383] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.791 [2024-11-05 04:40:16.280391] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.791 [2024-11-05 04:40:16.283944] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.791 [2024-11-05 04:40:16.293131] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.791 [2024-11-05 04:40:16.293797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.791 [2024-11-05 04:40:16.293835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.791 [2024-11-05 04:40:16.293847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.791 [2024-11-05 04:40:16.294087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.791 [2024-11-05 04:40:16.294309] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.791 [2024-11-05 04:40:16.294319] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.791 [2024-11-05 04:40:16.294327] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.791 [2024-11-05 04:40:16.297879] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.791 [2024-11-05 04:40:16.307066] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.791 [2024-11-05 04:40:16.307605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.791 [2024-11-05 04:40:16.307624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.791 [2024-11-05 04:40:16.307637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.791 [2024-11-05 04:40:16.307864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.791 [2024-11-05 04:40:16.308084] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.791 [2024-11-05 04:40:16.308092] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.791 [2024-11-05 04:40:16.308099] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.791 [2024-11-05 04:40:16.311643] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.791 [2024-11-05 04:40:16.321034] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.791 [2024-11-05 04:40:16.321646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.791 [2024-11-05 04:40:16.321684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.792 [2024-11-05 04:40:16.321696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.792 [2024-11-05 04:40:16.321946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.792 [2024-11-05 04:40:16.322169] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.792 [2024-11-05 04:40:16.322178] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.792 [2024-11-05 04:40:16.322185] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.792 [2024-11-05 04:40:16.325727] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.792 [2024-11-05 04:40:16.334926] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.792 [2024-11-05 04:40:16.335592] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.792 [2024-11-05 04:40:16.335630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.792 [2024-11-05 04:40:16.335641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.792 [2024-11-05 04:40:16.335888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.792 [2024-11-05 04:40:16.336111] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.792 [2024-11-05 04:40:16.336120] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.792 [2024-11-05 04:40:16.336127] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.792 [2024-11-05 04:40:16.339672] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.792 [2024-11-05 04:40:16.348856] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.792 [2024-11-05 04:40:16.349386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.792 [2024-11-05 04:40:16.349424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.792 [2024-11-05 04:40:16.349436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.792 [2024-11-05 04:40:16.349678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.792 [2024-11-05 04:40:16.349910] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.792 [2024-11-05 04:40:16.349924] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.792 [2024-11-05 04:40:16.349932] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.792 [2024-11-05 04:40:16.353486] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.792 [2024-11-05 04:40:16.362676] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.792 [2024-11-05 04:40:16.363192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.792 [2024-11-05 04:40:16.363229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.792 [2024-11-05 04:40:16.363240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.792 [2024-11-05 04:40:16.363478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.792 [2024-11-05 04:40:16.363700] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.792 [2024-11-05 04:40:16.363708] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.792 [2024-11-05 04:40:16.363716] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.792 [2024-11-05 04:40:16.367270] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.792 [2024-11-05 04:40:16.376503] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.792 [2024-11-05 04:40:16.377072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.792 [2024-11-05 04:40:16.377108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.792 [2024-11-05 04:40:16.377120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.792 [2024-11-05 04:40:16.377358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.792 [2024-11-05 04:40:16.377581] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.792 [2024-11-05 04:40:16.377589] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.792 [2024-11-05 04:40:16.377597] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.792 [2024-11-05 04:40:16.381149] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.792 [2024-11-05 04:40:16.390342] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.792 [2024-11-05 04:40:16.390985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.792 [2024-11-05 04:40:16.391023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.792 [2024-11-05 04:40:16.391033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.792 [2024-11-05 04:40:16.391271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.792 [2024-11-05 04:40:16.391494] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.792 [2024-11-05 04:40:16.391502] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.792 [2024-11-05 04:40:16.391510] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.792 [2024-11-05 04:40:16.395066] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:02.792 [2024-11-05 04:40:16.404253] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.792 [2024-11-05 04:40:16.404852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.792 [2024-11-05 04:40:16.404890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.792 [2024-11-05 04:40:16.404902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.792 [2024-11-05 04:40:16.405141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.792 [2024-11-05 04:40:16.405364] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.792 [2024-11-05 04:40:16.405373] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.792 [2024-11-05 04:40:16.405381] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.792 [2024-11-05 04:40:16.408935] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:02.792 [2024-11-05 04:40:16.418124] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:02.792 [2024-11-05 04:40:16.418784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.792 [2024-11-05 04:40:16.418822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:02.792 [2024-11-05 04:40:16.418835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:02.792 [2024-11-05 04:40:16.419074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:02.792 [2024-11-05 04:40:16.419296] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:02.792 [2024-11-05 04:40:16.419305] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:02.792 [2024-11-05 04:40:16.419313] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:02.792 [2024-11-05 04:40:16.422864] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.054 [2024-11-05 04:40:16.432059] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.054 [2024-11-05 04:40:16.432737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-11-05 04:40:16.432782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:03.054 [2024-11-05 04:40:16.432794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:03.054 [2024-11-05 04:40:16.433033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:03.054 [2024-11-05 04:40:16.433255] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.054 [2024-11-05 04:40:16.433264] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.054 [2024-11-05 04:40:16.433272] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.054 [2024-11-05 04:40:16.436821] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:03.054 [2024-11-05 04:40:16.446008] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:03.054 [2024-11-05 04:40:16.446703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:03.054 [2024-11-05 04:40:16.446741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:03.054 [2024-11-05 04:40:16.446762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:03.054 [2024-11-05 04:40:16.447001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:03.054 [2024-11-05 04:40:16.447224] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:03.054 [2024-11-05 04:40:16.447233] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:03.054 [2024-11-05 04:40:16.447241] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:03.054 [2024-11-05 04:40:16.450788] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:03.054 [2024-11-05 04:40:16.459989] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.054 [2024-11-05 04:40:16.460636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.054 [2024-11-05 04:40:16.460674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.054 [2024-11-05 04:40:16.460686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.054 [2024-11-05 04:40:16.460935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.054 [2024-11-05 04:40:16.461158] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.054 [2024-11-05 04:40:16.461166] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.054 [2024-11-05 04:40:16.461174] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.054 [2024-11-05 04:40:16.464717] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.054 9367.67 IOPS, 36.59 MiB/s [2024-11-05T03:40:16.694Z] [2024-11-05 04:40:16.475572] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.054 [2024-11-05 04:40:16.476121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.054 [2024-11-05 04:40:16.476141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.054 [2024-11-05 04:40:16.476149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.054 [2024-11-05 04:40:16.476369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.054 [2024-11-05 04:40:16.476588] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.054 [2024-11-05 04:40:16.476597] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.054 [2024-11-05 04:40:16.476604] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.054 [2024-11-05 04:40:16.480146] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.054 [2024-11-05 04:40:16.489541] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.054 [2024-11-05 04:40:16.489964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.054 [2024-11-05 04:40:16.489983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.054 [2024-11-05 04:40:16.489995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.054 [2024-11-05 04:40:16.490215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.054 [2024-11-05 04:40:16.490434] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.054 [2024-11-05 04:40:16.490443] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.054 [2024-11-05 04:40:16.490450] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.054 [2024-11-05 04:40:16.493995] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.054 [2024-11-05 04:40:16.503473] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.055 [2024-11-05 04:40:16.504099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.055 [2024-11-05 04:40:16.504137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.055 [2024-11-05 04:40:16.504148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.055 [2024-11-05 04:40:16.504386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.055 [2024-11-05 04:40:16.504608] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.055 [2024-11-05 04:40:16.504618] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.055 [2024-11-05 04:40:16.504626] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.055 [2024-11-05 04:40:16.508178] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.055 [2024-11-05 04:40:16.517370] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.055 [2024-11-05 04:40:16.518036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.055 [2024-11-05 04:40:16.518074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.055 [2024-11-05 04:40:16.518086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.055 [2024-11-05 04:40:16.518327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.055 [2024-11-05 04:40:16.518550] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.055 [2024-11-05 04:40:16.518559] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.055 [2024-11-05 04:40:16.518567] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.055 [2024-11-05 04:40:16.522121] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.055 [2024-11-05 04:40:16.531322] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.055 [2024-11-05 04:40:16.531850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.055 [2024-11-05 04:40:16.531888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.055 [2024-11-05 04:40:16.531900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.055 [2024-11-05 04:40:16.532139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.055 [2024-11-05 04:40:16.532366] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.055 [2024-11-05 04:40:16.532376] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.055 [2024-11-05 04:40:16.532383] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.055 [2024-11-05 04:40:16.535937] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.055 [2024-11-05 04:40:16.545125] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.055 [2024-11-05 04:40:16.545824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.055 [2024-11-05 04:40:16.545862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.055 [2024-11-05 04:40:16.545875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.055 [2024-11-05 04:40:16.546116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.055 [2024-11-05 04:40:16.546339] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.055 [2024-11-05 04:40:16.546348] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.055 [2024-11-05 04:40:16.546355] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.055 [2024-11-05 04:40:16.549908] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.055 [2024-11-05 04:40:16.559103] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.055 [2024-11-05 04:40:16.559772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.055 [2024-11-05 04:40:16.559811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.055 [2024-11-05 04:40:16.559823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.055 [2024-11-05 04:40:16.560062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.055 [2024-11-05 04:40:16.560285] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.055 [2024-11-05 04:40:16.560294] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.055 [2024-11-05 04:40:16.560302] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.055 [2024-11-05 04:40:16.563878] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.055 [2024-11-05 04:40:16.573076] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.055 [2024-11-05 04:40:16.573757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.055 [2024-11-05 04:40:16.573795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.055 [2024-11-05 04:40:16.573805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.055 [2024-11-05 04:40:16.574043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.055 [2024-11-05 04:40:16.574265] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.055 [2024-11-05 04:40:16.574274] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.055 [2024-11-05 04:40:16.574286] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.055 [2024-11-05 04:40:16.577837] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.055 [2024-11-05 04:40:16.587040] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.055 [2024-11-05 04:40:16.587715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.055 [2024-11-05 04:40:16.587760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.055 [2024-11-05 04:40:16.587773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.055 [2024-11-05 04:40:16.588012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.055 [2024-11-05 04:40:16.588235] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.055 [2024-11-05 04:40:16.588244] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.055 [2024-11-05 04:40:16.588251] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.055 [2024-11-05 04:40:16.591798] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.055 [2024-11-05 04:40:16.600985] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.055 [2024-11-05 04:40:16.601529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.055 [2024-11-05 04:40:16.601547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.055 [2024-11-05 04:40:16.601556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.055 [2024-11-05 04:40:16.601780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.055 [2024-11-05 04:40:16.601999] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.055 [2024-11-05 04:40:16.602008] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.055 [2024-11-05 04:40:16.602015] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.055 [2024-11-05 04:40:16.605552] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.055 [2024-11-05 04:40:16.614945] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.055 [2024-11-05 04:40:16.615512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.055 [2024-11-05 04:40:16.615529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.055 [2024-11-05 04:40:16.615536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.055 [2024-11-05 04:40:16.615759] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.055 [2024-11-05 04:40:16.615979] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.055 [2024-11-05 04:40:16.615987] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.055 [2024-11-05 04:40:16.615994] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.055 [2024-11-05 04:40:16.619532] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.055 [2024-11-05 04:40:16.628925] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.055 [2024-11-05 04:40:16.629511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.055 [2024-11-05 04:40:16.629527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.055 [2024-11-05 04:40:16.629534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.055 [2024-11-05 04:40:16.629768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.055 [2024-11-05 04:40:16.629989] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.055 [2024-11-05 04:40:16.629997] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.055 [2024-11-05 04:40:16.630004] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.055 [2024-11-05 04:40:16.633542] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.056 [2024-11-05 04:40:16.642724] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.056 [2024-11-05 04:40:16.643400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.056 [2024-11-05 04:40:16.643438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.056 [2024-11-05 04:40:16.643449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.056 [2024-11-05 04:40:16.643687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.056 [2024-11-05 04:40:16.643917] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.056 [2024-11-05 04:40:16.643927] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.056 [2024-11-05 04:40:16.643934] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.056 [2024-11-05 04:40:16.647481] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.056 [2024-11-05 04:40:16.656684] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.056 [2024-11-05 04:40:16.657236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.056 [2024-11-05 04:40:16.657255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.056 [2024-11-05 04:40:16.657263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.056 [2024-11-05 04:40:16.657482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.056 [2024-11-05 04:40:16.657701] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.056 [2024-11-05 04:40:16.657709] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.056 [2024-11-05 04:40:16.657716] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.056 [2024-11-05 04:40:16.661260] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.056 [2024-11-05 04:40:16.670648] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.056 [2024-11-05 04:40:16.671189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.056 [2024-11-05 04:40:16.671205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.056 [2024-11-05 04:40:16.671218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.056 [2024-11-05 04:40:16.671437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.056 [2024-11-05 04:40:16.671655] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.056 [2024-11-05 04:40:16.671664] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.056 [2024-11-05 04:40:16.671671] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.056 [2024-11-05 04:40:16.675214] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.056 [2024-11-05 04:40:16.684604] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.056 [2024-11-05 04:40:16.685151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.056 [2024-11-05 04:40:16.685168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.056 [2024-11-05 04:40:16.685175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.056 [2024-11-05 04:40:16.685393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.056 [2024-11-05 04:40:16.685611] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.056 [2024-11-05 04:40:16.685620] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.056 [2024-11-05 04:40:16.685627] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.056 [2024-11-05 04:40:16.689167] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.318 [2024-11-05 04:40:16.698569] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.318 [2024-11-05 04:40:16.699113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.318 [2024-11-05 04:40:16.699129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.318 [2024-11-05 04:40:16.699137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.318 [2024-11-05 04:40:16.699355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.318 [2024-11-05 04:40:16.699574] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.318 [2024-11-05 04:40:16.699583] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.318 [2024-11-05 04:40:16.699590] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.318 [2024-11-05 04:40:16.703132] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.318 [2024-11-05 04:40:16.712522] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.318 [2024-11-05 04:40:16.712936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.318 [2024-11-05 04:40:16.712954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.318 [2024-11-05 04:40:16.712962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.318 [2024-11-05 04:40:16.713180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.318 [2024-11-05 04:40:16.713403] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.318 [2024-11-05 04:40:16.713411] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.318 [2024-11-05 04:40:16.713418] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.318 [2024-11-05 04:40:16.716963] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.318 [2024-11-05 04:40:16.726354] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.318 [2024-11-05 04:40:16.726794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.318 [2024-11-05 04:40:16.726810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.318 [2024-11-05 04:40:16.726818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.318 [2024-11-05 04:40:16.727037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.318 [2024-11-05 04:40:16.727255] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.318 [2024-11-05 04:40:16.727264] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.318 [2024-11-05 04:40:16.727271] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.318 [2024-11-05 04:40:16.730824] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.318 [2024-11-05 04:40:16.740218] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.318 [2024-11-05 04:40:16.740793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.318 [2024-11-05 04:40:16.740809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.318 [2024-11-05 04:40:16.740816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.318 [2024-11-05 04:40:16.741035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.318 [2024-11-05 04:40:16.741253] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.318 [2024-11-05 04:40:16.741261] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.318 [2024-11-05 04:40:16.741268] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.318 [2024-11-05 04:40:16.744811] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.318 [2024-11-05 04:40:16.754211] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.319 [2024-11-05 04:40:16.754778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.319 [2024-11-05 04:40:16.754816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.319 [2024-11-05 04:40:16.754828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.319 [2024-11-05 04:40:16.755069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.319 [2024-11-05 04:40:16.755291] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.319 [2024-11-05 04:40:16.755300] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.319 [2024-11-05 04:40:16.755312] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.319 [2024-11-05 04:40:16.758866] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.319 [2024-11-05 04:40:16.768055] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.319 [2024-11-05 04:40:16.768578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.319 [2024-11-05 04:40:16.768616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.319 [2024-11-05 04:40:16.768627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.319 [2024-11-05 04:40:16.768872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.319 [2024-11-05 04:40:16.769095] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.319 [2024-11-05 04:40:16.769104] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.319 [2024-11-05 04:40:16.769113] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.319 [2024-11-05 04:40:16.772657] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.319 [2024-11-05 04:40:16.781852] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.319 [2024-11-05 04:40:16.782482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.319 [2024-11-05 04:40:16.782520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.319 [2024-11-05 04:40:16.782532] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.319 [2024-11-05 04:40:16.782779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.319 [2024-11-05 04:40:16.783002] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.319 [2024-11-05 04:40:16.783011] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.319 [2024-11-05 04:40:16.783019] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.319 [2024-11-05 04:40:16.786565] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.319 [2024-11-05 04:40:16.795762] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.319 [2024-11-05 04:40:16.796432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.319 [2024-11-05 04:40:16.796470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.319 [2024-11-05 04:40:16.796480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.319 [2024-11-05 04:40:16.796719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.319 [2024-11-05 04:40:16.796951] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.319 [2024-11-05 04:40:16.796961] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.319 [2024-11-05 04:40:16.796969] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.319 [2024-11-05 04:40:16.800515] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.319 [2024-11-05 04:40:16.809708] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.319 [2024-11-05 04:40:16.810388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.319 [2024-11-05 04:40:16.810426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.319 [2024-11-05 04:40:16.810437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.319 [2024-11-05 04:40:16.810675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.319 [2024-11-05 04:40:16.810905] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.319 [2024-11-05 04:40:16.810915] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.319 [2024-11-05 04:40:16.810922] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.319 [2024-11-05 04:40:16.814467] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.319 [2024-11-05 04:40:16.823661] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.319 [2024-11-05 04:40:16.824347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.319 [2024-11-05 04:40:16.824385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.319 [2024-11-05 04:40:16.824396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.319 [2024-11-05 04:40:16.824633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.319 [2024-11-05 04:40:16.824863] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.319 [2024-11-05 04:40:16.824873] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.319 [2024-11-05 04:40:16.824881] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.319 [2024-11-05 04:40:16.828426] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.319 [2024-11-05 04:40:16.837625] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.319 [2024-11-05 04:40:16.838173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.319 [2024-11-05 04:40:16.838192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.319 [2024-11-05 04:40:16.838200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.319 [2024-11-05 04:40:16.838419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.319 [2024-11-05 04:40:16.838638] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.319 [2024-11-05 04:40:16.838646] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.319 [2024-11-05 04:40:16.838654] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.319 [2024-11-05 04:40:16.842199] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.319 [2024-11-05 04:40:16.851590] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.319 [2024-11-05 04:40:16.852211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.319 [2024-11-05 04:40:16.852248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.319 [2024-11-05 04:40:16.852264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.319 [2024-11-05 04:40:16.852502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.319 [2024-11-05 04:40:16.852724] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.319 [2024-11-05 04:40:16.852733] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.319 [2024-11-05 04:40:16.852740] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.319 [2024-11-05 04:40:16.856306] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.319 [2024-11-05 04:40:16.865498] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.319 [2024-11-05 04:40:16.866598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.319 [2024-11-05 04:40:16.866630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.319 [2024-11-05 04:40:16.866641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.319 [2024-11-05 04:40:16.866886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.319 [2024-11-05 04:40:16.867110] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.319 [2024-11-05 04:40:16.867119] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.319 [2024-11-05 04:40:16.867126] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.319 [2024-11-05 04:40:16.870672] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.319 [2024-11-05 04:40:16.879446] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.319 [2024-11-05 04:40:16.880005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.319 [2024-11-05 04:40:16.880024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.319 [2024-11-05 04:40:16.880033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.319 [2024-11-05 04:40:16.880252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.319 [2024-11-05 04:40:16.880471] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.319 [2024-11-05 04:40:16.880480] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.319 [2024-11-05 04:40:16.880488] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.319 [2024-11-05 04:40:16.884031] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.319 [2024-11-05 04:40:16.893245] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.320 [2024-11-05 04:40:16.893773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.320 [2024-11-05 04:40:16.893789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.320 [2024-11-05 04:40:16.893797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.320 [2024-11-05 04:40:16.894016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.320 [2024-11-05 04:40:16.894240] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.320 [2024-11-05 04:40:16.894249] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.320 [2024-11-05 04:40:16.894256] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.320 [2024-11-05 04:40:16.897801] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.320 [2024-11-05 04:40:16.907188] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.320 [2024-11-05 04:40:16.907642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.320 [2024-11-05 04:40:16.907657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.320 [2024-11-05 04:40:16.907665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.320 [2024-11-05 04:40:16.907888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.320 [2024-11-05 04:40:16.908107] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.320 [2024-11-05 04:40:16.908116] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.320 [2024-11-05 04:40:16.908123] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.320 [2024-11-05 04:40:16.911659] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.320 [2024-11-05 04:40:16.921051] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.320 [2024-11-05 04:40:16.921716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.320 [2024-11-05 04:40:16.921762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.320 [2024-11-05 04:40:16.921775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.320 [2024-11-05 04:40:16.922017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.320 [2024-11-05 04:40:16.922239] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.320 [2024-11-05 04:40:16.922248] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.320 [2024-11-05 04:40:16.922256] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.320 [2024-11-05 04:40:16.925801] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.320 [2024-11-05 04:40:16.935002] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.320 [2024-11-05 04:40:16.935718] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.320 [2024-11-05 04:40:16.935764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.320 [2024-11-05 04:40:16.935775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.320 [2024-11-05 04:40:16.936013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.320 [2024-11-05 04:40:16.936236] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.320 [2024-11-05 04:40:16.936245] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.320 [2024-11-05 04:40:16.936257] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.320 [2024-11-05 04:40:16.939806] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.320 [2024-11-05 04:40:16.948995] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.320 [2024-11-05 04:40:16.949544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.320 [2024-11-05 04:40:16.949563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.320 [2024-11-05 04:40:16.949571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.320 [2024-11-05 04:40:16.949796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.320 [2024-11-05 04:40:16.950016] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.320 [2024-11-05 04:40:16.950024] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.320 [2024-11-05 04:40:16.950031] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.320 [2024-11-05 04:40:16.953568] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.582 [2024-11-05 04:40:16.962982] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.582 [2024-11-05 04:40:16.963480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.582 [2024-11-05 04:40:16.963496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.582 [2024-11-05 04:40:16.963504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.582 [2024-11-05 04:40:16.963722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.582 [2024-11-05 04:40:16.963947] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.582 [2024-11-05 04:40:16.963956] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.582 [2024-11-05 04:40:16.963963] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.582 [2024-11-05 04:40:16.967497] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.582 [2024-11-05 04:40:16.976892] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.582 [2024-11-05 04:40:16.977524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.582 [2024-11-05 04:40:16.977562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.582 [2024-11-05 04:40:16.977573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.582 [2024-11-05 04:40:16.977819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.582 [2024-11-05 04:40:16.978042] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.582 [2024-11-05 04:40:16.978050] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.582 [2024-11-05 04:40:16.978058] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.582 [2024-11-05 04:40:16.981604] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.582 [2024-11-05 04:40:16.990804] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.582 [2024-11-05 04:40:16.991424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.582 [2024-11-05 04:40:16.991461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.582 [2024-11-05 04:40:16.991473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.582 [2024-11-05 04:40:16.991713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.582 [2024-11-05 04:40:16.991944] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.582 [2024-11-05 04:40:16.991953] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.582 [2024-11-05 04:40:16.991960] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.582 [2024-11-05 04:40:16.995505] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.582 [2024-11-05 04:40:17.004691] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.582 [2024-11-05 04:40:17.005259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.582 [2024-11-05 04:40:17.005279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.582 [2024-11-05 04:40:17.005287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.582 [2024-11-05 04:40:17.005506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.582 [2024-11-05 04:40:17.005725] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.582 [2024-11-05 04:40:17.005734] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.582 [2024-11-05 04:40:17.005741] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.582 [2024-11-05 04:40:17.009285] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.582 [2024-11-05 04:40:17.018676] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.582 [2024-11-05 04:40:17.019199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.582 [2024-11-05 04:40:17.019215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.582 [2024-11-05 04:40:17.019223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.582 [2024-11-05 04:40:17.019442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.582 [2024-11-05 04:40:17.019660] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.582 [2024-11-05 04:40:17.019669] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.582 [2024-11-05 04:40:17.019676] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.582 [2024-11-05 04:40:17.023220] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.582 [2024-11-05 04:40:17.032628] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.582 [2024-11-05 04:40:17.033174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.582 [2024-11-05 04:40:17.033190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.582 [2024-11-05 04:40:17.033203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.582 [2024-11-05 04:40:17.033422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.582 [2024-11-05 04:40:17.033640] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.582 [2024-11-05 04:40:17.033649] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.582 [2024-11-05 04:40:17.033657] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.582 [2024-11-05 04:40:17.037201] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.582 [2024-11-05 04:40:17.046467] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.582 [2024-11-05 04:40:17.047102] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.582 [2024-11-05 04:40:17.047141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.582 [2024-11-05 04:40:17.047152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.582 [2024-11-05 04:40:17.047390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.582 [2024-11-05 04:40:17.047612] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.582 [2024-11-05 04:40:17.047621] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.583 [2024-11-05 04:40:17.047628] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.583 [2024-11-05 04:40:17.051182] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.583 [2024-11-05 04:40:17.060384] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.583 [2024-11-05 04:40:17.060973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.583 [2024-11-05 04:40:17.060992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.583 [2024-11-05 04:40:17.061000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.583 [2024-11-05 04:40:17.061220] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.583 [2024-11-05 04:40:17.061439] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.583 [2024-11-05 04:40:17.061447] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.583 [2024-11-05 04:40:17.061454] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.583 [2024-11-05 04:40:17.064997] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.583 [2024-11-05 04:40:17.074184] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.583 [2024-11-05 04:40:17.074719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.583 [2024-11-05 04:40:17.074735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.583 [2024-11-05 04:40:17.074743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.583 [2024-11-05 04:40:17.074969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.583 [2024-11-05 04:40:17.075193] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.583 [2024-11-05 04:40:17.075201] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.583 [2024-11-05 04:40:17.075208] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.583 [2024-11-05 04:40:17.078787] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.583 [2024-11-05 04:40:17.087993] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.583 [2024-11-05 04:40:17.088608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.583 [2024-11-05 04:40:17.088646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.583 [2024-11-05 04:40:17.088656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.583 [2024-11-05 04:40:17.088902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.583 [2024-11-05 04:40:17.089125] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.583 [2024-11-05 04:40:17.089134] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.583 [2024-11-05 04:40:17.089142] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.583 [2024-11-05 04:40:17.092685] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.583 [2024-11-05 04:40:17.101878] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.583 [2024-11-05 04:40:17.102543] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.583 [2024-11-05 04:40:17.102581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.583 [2024-11-05 04:40:17.102592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.583 [2024-11-05 04:40:17.102837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.583 [2024-11-05 04:40:17.103061] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.583 [2024-11-05 04:40:17.103069] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.583 [2024-11-05 04:40:17.103077] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.583 [2024-11-05 04:40:17.106621] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.583 [2024-11-05 04:40:17.115819] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.583 [2024-11-05 04:40:17.116464] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.583 [2024-11-05 04:40:17.116502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.583 [2024-11-05 04:40:17.116513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.583 [2024-11-05 04:40:17.116760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.583 [2024-11-05 04:40:17.116983] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.583 [2024-11-05 04:40:17.116992] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.583 [2024-11-05 04:40:17.117004] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.583 [2024-11-05 04:40:17.120549] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.583 [2024-11-05 04:40:17.129742] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.583 [2024-11-05 04:40:17.130305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.583 [2024-11-05 04:40:17.130324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.583 [2024-11-05 04:40:17.130332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.583 [2024-11-05 04:40:17.130551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.583 [2024-11-05 04:40:17.130775] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.583 [2024-11-05 04:40:17.130792] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.583 [2024-11-05 04:40:17.130800] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.583 [2024-11-05 04:40:17.134348] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.583 [2024-11-05 04:40:17.143563] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.583 [2024-11-05 04:40:17.144199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.583 [2024-11-05 04:40:17.144237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.583 [2024-11-05 04:40:17.144248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.583 [2024-11-05 04:40:17.144485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.583 [2024-11-05 04:40:17.144708] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.583 [2024-11-05 04:40:17.144718] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.583 [2024-11-05 04:40:17.144725] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.583 [2024-11-05 04:40:17.148278] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.583 [2024-11-05 04:40:17.157478] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.583 [2024-11-05 04:40:17.158061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.583 [2024-11-05 04:40:17.158080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.583 [2024-11-05 04:40:17.158088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.583 [2024-11-05 04:40:17.158307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.583 [2024-11-05 04:40:17.158526] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.583 [2024-11-05 04:40:17.158535] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.583 [2024-11-05 04:40:17.158542] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.583 [2024-11-05 04:40:17.162085] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.583 [2024-11-05 04:40:17.171283] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.583 [2024-11-05 04:40:17.171848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.583 [2024-11-05 04:40:17.171865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.583 [2024-11-05 04:40:17.171872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.583 [2024-11-05 04:40:17.172091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.583 [2024-11-05 04:40:17.172310] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.583 [2024-11-05 04:40:17.172319] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.583 [2024-11-05 04:40:17.172326] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.583 [2024-11-05 04:40:17.175868] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.583 [2024-11-05 04:40:17.185255] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.583 [2024-11-05 04:40:17.186281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.583 [2024-11-05 04:40:17.186305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.583 [2024-11-05 04:40:17.186313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.583 [2024-11-05 04:40:17.186539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.583 [2024-11-05 04:40:17.186766] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.583 [2024-11-05 04:40:17.186776] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.584 [2024-11-05 04:40:17.186784] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.584 [2024-11-05 04:40:17.190330] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.584 [2024-11-05 04:40:17.199119] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.584 [2024-11-05 04:40:17.199799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.584 [2024-11-05 04:40:17.199838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.584 [2024-11-05 04:40:17.199850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.584 [2024-11-05 04:40:17.200092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.584 [2024-11-05 04:40:17.200316] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.584 [2024-11-05 04:40:17.200325] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.584 [2024-11-05 04:40:17.200332] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.584 [2024-11-05 04:40:17.203887] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.584 [2024-11-05 04:40:17.213078] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.584 [2024-11-05 04:40:17.213721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.584 [2024-11-05 04:40:17.213766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.584 [2024-11-05 04:40:17.213783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.584 [2024-11-05 04:40:17.214023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.584 [2024-11-05 04:40:17.214246] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.584 [2024-11-05 04:40:17.214255] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.584 [2024-11-05 04:40:17.214263] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.584 [2024-11-05 04:40:17.217810] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.846 [2024-11-05 04:40:17.227002] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.846 [2024-11-05 04:40:17.227637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.846 [2024-11-05 04:40:17.227674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.846 [2024-11-05 04:40:17.227685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.846 [2024-11-05 04:40:17.227931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.846 [2024-11-05 04:40:17.228154] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.846 [2024-11-05 04:40:17.228162] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.846 [2024-11-05 04:40:17.228170] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.846 [2024-11-05 04:40:17.231727] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.846 [2024-11-05 04:40:17.241123] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.846 [2024-11-05 04:40:17.241828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.846 [2024-11-05 04:40:17.241867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.846 [2024-11-05 04:40:17.241879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.846 [2024-11-05 04:40:17.242118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.846 [2024-11-05 04:40:17.242341] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.846 [2024-11-05 04:40:17.242350] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.846 [2024-11-05 04:40:17.242357] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.846 [2024-11-05 04:40:17.245911] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.846 [2024-11-05 04:40:17.255108] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.846 [2024-11-05 04:40:17.255689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.846 [2024-11-05 04:40:17.255708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.846 [2024-11-05 04:40:17.255716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.846 [2024-11-05 04:40:17.255942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.846 [2024-11-05 04:40:17.256167] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.846 [2024-11-05 04:40:17.256175] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.846 [2024-11-05 04:40:17.256182] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.846 [2024-11-05 04:40:17.259721] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.846 [2024-11-05 04:40:17.268952] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.846 [2024-11-05 04:40:17.269586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.846 [2024-11-05 04:40:17.269623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.846 [2024-11-05 04:40:17.269636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.846 [2024-11-05 04:40:17.269883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.846 [2024-11-05 04:40:17.270107] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.846 [2024-11-05 04:40:17.270116] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.846 [2024-11-05 04:40:17.270124] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.846 [2024-11-05 04:40:17.273669] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.846 [2024-11-05 04:40:17.282858] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.846 [2024-11-05 04:40:17.283466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.846 [2024-11-05 04:40:17.283504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.846 [2024-11-05 04:40:17.283514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.846 [2024-11-05 04:40:17.283760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.846 [2024-11-05 04:40:17.283984] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.846 [2024-11-05 04:40:17.283993] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.846 [2024-11-05 04:40:17.284000] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.846 [2024-11-05 04:40:17.287543] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.846 [2024-11-05 04:40:17.296728] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.846 [2024-11-05 04:40:17.297400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.846 [2024-11-05 04:40:17.297437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.846 [2024-11-05 04:40:17.297448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.846 [2024-11-05 04:40:17.297685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.846 [2024-11-05 04:40:17.297918] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.846 [2024-11-05 04:40:17.297928] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.846 [2024-11-05 04:40:17.297940] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.846 [2024-11-05 04:40:17.301486] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.846 [2024-11-05 04:40:17.310672] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.846 [2024-11-05 04:40:17.311217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.846 [2024-11-05 04:40:17.311237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.846 [2024-11-05 04:40:17.311245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.846 [2024-11-05 04:40:17.311464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.846 [2024-11-05 04:40:17.311682] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.846 [2024-11-05 04:40:17.311690] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.846 [2024-11-05 04:40:17.311697] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.846 [2024-11-05 04:40:17.315241] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.847 [2024-11-05 04:40:17.324646] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.847 [2024-11-05 04:40:17.325186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.847 [2024-11-05 04:40:17.325203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.847 [2024-11-05 04:40:17.325210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.847 [2024-11-05 04:40:17.325429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.847 [2024-11-05 04:40:17.325647] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.847 [2024-11-05 04:40:17.325656] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.847 [2024-11-05 04:40:17.325663] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.847 [2024-11-05 04:40:17.329207] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.847 [2024-11-05 04:40:17.338611] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.847 [2024-11-05 04:40:17.339155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.847 [2024-11-05 04:40:17.339193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.847 [2024-11-05 04:40:17.339205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.847 [2024-11-05 04:40:17.339444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.847 [2024-11-05 04:40:17.339667] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.847 [2024-11-05 04:40:17.339676] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.847 [2024-11-05 04:40:17.339684] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.847 [2024-11-05 04:40:17.343236] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.847 [2024-11-05 04:40:17.352432] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.847 [2024-11-05 04:40:17.353075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.847 [2024-11-05 04:40:17.353113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.847 [2024-11-05 04:40:17.353124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.847 [2024-11-05 04:40:17.353361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.847 [2024-11-05 04:40:17.353584] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.847 [2024-11-05 04:40:17.353593] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.847 [2024-11-05 04:40:17.353600] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.847 [2024-11-05 04:40:17.357161] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.847 [2024-11-05 04:40:17.366346] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.847 [2024-11-05 04:40:17.367035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.847 [2024-11-05 04:40:17.367073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.847 [2024-11-05 04:40:17.367083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.847 [2024-11-05 04:40:17.367321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.847 [2024-11-05 04:40:17.367544] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.847 [2024-11-05 04:40:17.367552] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.847 [2024-11-05 04:40:17.367560] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.847 [2024-11-05 04:40:17.371114] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.847 [2024-11-05 04:40:17.380295] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.847 [2024-11-05 04:40:17.380724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.847 [2024-11-05 04:40:17.380754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.847 [2024-11-05 04:40:17.380769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.847 [2024-11-05 04:40:17.380992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.847 [2024-11-05 04:40:17.381212] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.847 [2024-11-05 04:40:17.381220] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.847 [2024-11-05 04:40:17.381227] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.847 [2024-11-05 04:40:17.384767] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.847 [2024-11-05 04:40:17.394148] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.847 [2024-11-05 04:40:17.394710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.847 [2024-11-05 04:40:17.394726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.847 [2024-11-05 04:40:17.394738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.847 [2024-11-05 04:40:17.394963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.847 [2024-11-05 04:40:17.395182] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.847 [2024-11-05 04:40:17.395190] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.847 [2024-11-05 04:40:17.395197] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.847 [2024-11-05 04:40:17.398733] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.847 [2024-11-05 04:40:17.408112] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.847 [2024-11-05 04:40:17.408676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.847 [2024-11-05 04:40:17.408691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.847 [2024-11-05 04:40:17.408699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.847 [2024-11-05 04:40:17.408923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.847 [2024-11-05 04:40:17.409142] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.847 [2024-11-05 04:40:17.409149] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.847 [2024-11-05 04:40:17.409156] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.847 [2024-11-05 04:40:17.412689] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.847 [2024-11-05 04:40:17.422068] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.847 [2024-11-05 04:40:17.422605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.847 [2024-11-05 04:40:17.422621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.847 [2024-11-05 04:40:17.422629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.847 [2024-11-05 04:40:17.422853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.847 [2024-11-05 04:40:17.423073] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.847 [2024-11-05 04:40:17.423082] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.847 [2024-11-05 04:40:17.423089] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.847 [2024-11-05 04:40:17.426627] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.847 [2024-11-05 04:40:17.436021] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.847 [2024-11-05 04:40:17.436686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.847 [2024-11-05 04:40:17.436723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.847 [2024-11-05 04:40:17.436734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.847 [2024-11-05 04:40:17.436980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.847 [2024-11-05 04:40:17.437208] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.847 [2024-11-05 04:40:17.437217] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.847 [2024-11-05 04:40:17.437224] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.847 [2024-11-05 04:40:17.440771] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.847 [2024-11-05 04:40:17.449952] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.847 [2024-11-05 04:40:17.450579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.847 [2024-11-05 04:40:17.450617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.847 [2024-11-05 04:40:17.450628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.847 [2024-11-05 04:40:17.450875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.847 [2024-11-05 04:40:17.451098] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.847 [2024-11-05 04:40:17.451107] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.847 [2024-11-05 04:40:17.451114] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.847 [2024-11-05 04:40:17.454666] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.848 [2024-11-05 04:40:17.463854] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.848 [2024-11-05 04:40:17.464480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.848 [2024-11-05 04:40:17.464518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.848 [2024-11-05 04:40:17.464528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.848 [2024-11-05 04:40:17.464776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.848 [2024-11-05 04:40:17.465000] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.848 [2024-11-05 04:40:17.465009] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.848 [2024-11-05 04:40:17.465017] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:03.848 [2024-11-05 04:40:17.468558] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:03.848 7025.75 IOPS, 27.44 MiB/s [2024-11-05T03:40:17.488Z] [2024-11-05 04:40:17.478993] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:03.848 [2024-11-05 04:40:17.479623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:03.848 [2024-11-05 04:40:17.479661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:03.848 [2024-11-05 04:40:17.479671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:03.848 [2024-11-05 04:40:17.479920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:03.848 [2024-11-05 04:40:17.480143] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:03.848 [2024-11-05 04:40:17.480152] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:03.848 [2024-11-05 04:40:17.480163] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.110 [2024-11-05 04:40:17.483710] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
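[Editor's note, not part of the console output: the interleaved "7025.75 IOPS, 27.44 MiB/s" line above appears to be a periodic throughput sample from the I/O generator that keeps running while the controller reset loop fails. The two figures are mutually consistent with a 4 KiB I/O size, which is an inference rather than something the log states: 7025.75 IOPS x 4096 B = 28,777,472 B/s, and 28,777,472 / 1,048,576 ≈ 27.44 MiB/s.]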
00:29:04.110 [2024-11-05 04:40:17.492905] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.110 [2024-11-05 04:40:17.493459] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.110 [2024-11-05 04:40:17.493497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:04.110 [2024-11-05 04:40:17.493509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:04.110 [2024-11-05 04:40:17.493757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:04.110 [2024-11-05 04:40:17.493981] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.110 [2024-11-05 04:40:17.493989] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.110 [2024-11-05 04:40:17.493997] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.110 [2024-11-05 04:40:17.497541] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.110 [2024-11-05 04:40:17.506721] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.110 [2024-11-05 04:40:17.507319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.110 [2024-11-05 04:40:17.507358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:04.110 [2024-11-05 04:40:17.507369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:04.110 [2024-11-05 04:40:17.507607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:04.110 [2024-11-05 04:40:17.507838] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.110 [2024-11-05 04:40:17.507848] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.110 [2024-11-05 04:40:17.507855] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.110 [2024-11-05 04:40:17.511398] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.110 [2024-11-05 04:40:17.520578] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.110 [2024-11-05 04:40:17.521041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.110 [2024-11-05 04:40:17.521061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:04.110 [2024-11-05 04:40:17.521069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:04.110 [2024-11-05 04:40:17.521288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:04.110 [2024-11-05 04:40:17.521506] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.110 [2024-11-05 04:40:17.521515] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.110 [2024-11-05 04:40:17.521522] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.110 [2024-11-05 04:40:17.525071] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.110 [2024-11-05 04:40:17.534484] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.110 [2024-11-05 04:40:17.535011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.110 [2024-11-05 04:40:17.535029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:04.110 [2024-11-05 04:40:17.535036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:04.110 [2024-11-05 04:40:17.535255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:04.110 [2024-11-05 04:40:17.535474] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.110 [2024-11-05 04:40:17.535482] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.110 [2024-11-05 04:40:17.535490] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.110 [2024-11-05 04:40:17.539032] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.110 [2024-11-05 04:40:17.548412] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.110 [2024-11-05 04:40:17.549032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.110 [2024-11-05 04:40:17.549070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:04.110 [2024-11-05 04:40:17.549081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:04.110 [2024-11-05 04:40:17.549319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:04.110 [2024-11-05 04:40:17.549541] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.110 [2024-11-05 04:40:17.549550] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.110 [2024-11-05 04:40:17.549557] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.111 [2024-11-05 04:40:17.553105] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.111 [2024-11-05 04:40:17.562301] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.111 [2024-11-05 04:40:17.562950] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.111 [2024-11-05 04:40:17.562988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:04.111 [2024-11-05 04:40:17.562998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:04.111 [2024-11-05 04:40:17.563236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:04.111 [2024-11-05 04:40:17.563459] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.111 [2024-11-05 04:40:17.563467] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.111 [2024-11-05 04:40:17.563475] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.111 [2024-11-05 04:40:17.567029] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.111 [2024-11-05 04:40:17.576207] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.111 [2024-11-05 04:40:17.576880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.111 [2024-11-05 04:40:17.576918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:04.111 [2024-11-05 04:40:17.576933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:04.111 [2024-11-05 04:40:17.577171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:04.111 [2024-11-05 04:40:17.577395] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.111 [2024-11-05 04:40:17.577403] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.111 [2024-11-05 04:40:17.577410] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.111 [2024-11-05 04:40:17.580964] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.111 [2024-11-05 04:40:17.590151] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.111 [2024-11-05 04:40:17.590723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.111 [2024-11-05 04:40:17.590766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:04.111 [2024-11-05 04:40:17.590778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:04.111 [2024-11-05 04:40:17.591016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:04.111 [2024-11-05 04:40:17.591239] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.111 [2024-11-05 04:40:17.591247] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.111 [2024-11-05 04:40:17.591255] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.111 [2024-11-05 04:40:17.594800] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.111 [2024-11-05 04:40:17.603979] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.111 [2024-11-05 04:40:17.604608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.111 [2024-11-05 04:40:17.604646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:04.111 [2024-11-05 04:40:17.604657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:04.111 [2024-11-05 04:40:17.604904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:04.111 [2024-11-05 04:40:17.605128] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.111 [2024-11-05 04:40:17.605137] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.111 [2024-11-05 04:40:17.605144] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.111 [2024-11-05 04:40:17.608687] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.111 [2024-11-05 04:40:17.617874] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:04.111 [2024-11-05 04:40:17.618547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:04.111 [2024-11-05 04:40:17.618584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:04.111 [2024-11-05 04:40:17.618594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:04.111 [2024-11-05 04:40:17.618842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:04.111 [2024-11-05 04:40:17.619070] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:04.111 [2024-11-05 04:40:17.619079] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:04.111 [2024-11-05 04:40:17.619086] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:04.111 [2024-11-05 04:40:17.622629] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:04.111 [2024-11-05 04:40:17.631812] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.111 [2024-11-05 04:40:17.632491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.111 [2024-11-05 04:40:17.632529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.111 [2024-11-05 04:40:17.632539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.111 [2024-11-05 04:40:17.632786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.111 [2024-11-05 04:40:17.633020] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.111 [2024-11-05 04:40:17.633030] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.111 [2024-11-05 04:40:17.633038] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.111 [2024-11-05 04:40:17.636582] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.111 [2024-11-05 04:40:17.645763] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.111 [2024-11-05 04:40:17.646419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.111 [2024-11-05 04:40:17.646457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.111 [2024-11-05 04:40:17.646468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.111 [2024-11-05 04:40:17.646706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.111 [2024-11-05 04:40:17.646938] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.111 [2024-11-05 04:40:17.646947] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.111 [2024-11-05 04:40:17.646955] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.111 [2024-11-05 04:40:17.650496] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.111 [2024-11-05 04:40:17.659686] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.111 [2024-11-05 04:40:17.660273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.111 [2024-11-05 04:40:17.660293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.111 [2024-11-05 04:40:17.660301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.111 [2024-11-05 04:40:17.660520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.111 [2024-11-05 04:40:17.660739] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.111 [2024-11-05 04:40:17.660753] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.111 [2024-11-05 04:40:17.660765] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.111 [2024-11-05 04:40:17.664301] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.111 [2024-11-05 04:40:17.673479] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.111 [2024-11-05 04:40:17.674012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.111 [2024-11-05 04:40:17.674029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.111 [2024-11-05 04:40:17.674037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.111 [2024-11-05 04:40:17.674255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.111 [2024-11-05 04:40:17.674474] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.111 [2024-11-05 04:40:17.674482] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.111 [2024-11-05 04:40:17.674489] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.111 [2024-11-05 04:40:17.678029] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.111 [2024-11-05 04:40:17.687408] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.111 [2024-11-05 04:40:17.688112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.111 [2024-11-05 04:40:17.688150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.111 [2024-11-05 04:40:17.688161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.111 [2024-11-05 04:40:17.688399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.111 [2024-11-05 04:40:17.688621] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.111 [2024-11-05 04:40:17.688630] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.111 [2024-11-05 04:40:17.688638] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.112 [2024-11-05 04:40:17.692189] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.112 [2024-11-05 04:40:17.701369] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.112 [2024-11-05 04:40:17.701838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.112 [2024-11-05 04:40:17.701858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.112 [2024-11-05 04:40:17.701866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.112 [2024-11-05 04:40:17.702086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.112 [2024-11-05 04:40:17.702305] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.112 [2024-11-05 04:40:17.702313] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.112 [2024-11-05 04:40:17.702320] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.112 [2024-11-05 04:40:17.705862] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.112 [2024-11-05 04:40:17.715254] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.112 [2024-11-05 04:40:17.715913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.112 [2024-11-05 04:40:17.715951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.112 [2024-11-05 04:40:17.715963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.112 [2024-11-05 04:40:17.716204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.112 [2024-11-05 04:40:17.716427] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.112 [2024-11-05 04:40:17.716435] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.112 [2024-11-05 04:40:17.716443] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.112 [2024-11-05 04:40:17.719992] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.112 [2024-11-05 04:40:17.729174] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.112 [2024-11-05 04:40:17.729791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.112 [2024-11-05 04:40:17.729829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.112 [2024-11-05 04:40:17.729840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.112 [2024-11-05 04:40:17.730078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.112 [2024-11-05 04:40:17.730301] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.112 [2024-11-05 04:40:17.730309] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.112 [2024-11-05 04:40:17.730316] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.112 [2024-11-05 04:40:17.733878] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.112 [2024-11-05 04:40:17.743076] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.112 [2024-11-05 04:40:17.743761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.112 [2024-11-05 04:40:17.743799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.112 [2024-11-05 04:40:17.743811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.112 [2024-11-05 04:40:17.744053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.112 [2024-11-05 04:40:17.744275] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.112 [2024-11-05 04:40:17.744283] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.112 [2024-11-05 04:40:17.744291] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.374 [2024-11-05 04:40:17.747845] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.374 [2024-11-05 04:40:17.757051] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.374 [2024-11-05 04:40:17.757716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.374 [2024-11-05 04:40:17.757761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.374 [2024-11-05 04:40:17.757777] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.374 [2024-11-05 04:40:17.758015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.374 [2024-11-05 04:40:17.758238] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.374 [2024-11-05 04:40:17.758246] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.374 [2024-11-05 04:40:17.758254] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.374 [2024-11-05 04:40:17.761803] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.374 [2024-11-05 04:40:17.770987] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.374 [2024-11-05 04:40:17.771652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.374 [2024-11-05 04:40:17.771690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.374 [2024-11-05 04:40:17.771701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.374 [2024-11-05 04:40:17.771949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.374 [2024-11-05 04:40:17.772172] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.374 [2024-11-05 04:40:17.772181] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.374 [2024-11-05 04:40:17.772189] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.374 [2024-11-05 04:40:17.775733] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.374 [2024-11-05 04:40:17.784931] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.374 [2024-11-05 04:40:17.785513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.374 [2024-11-05 04:40:17.785531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.374 [2024-11-05 04:40:17.785540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.374 [2024-11-05 04:40:17.785765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.374 [2024-11-05 04:40:17.785985] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.374 [2024-11-05 04:40:17.785992] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.374 [2024-11-05 04:40:17.785999] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.374 [2024-11-05 04:40:17.789536] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.374 [2024-11-05 04:40:17.798720] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.374 [2024-11-05 04:40:17.799379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.374 [2024-11-05 04:40:17.799419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.374 [2024-11-05 04:40:17.799429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.374 [2024-11-05 04:40:17.799667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.374 [2024-11-05 04:40:17.799906] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.374 [2024-11-05 04:40:17.799916] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.374 [2024-11-05 04:40:17.799924] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.375 [2024-11-05 04:40:17.803468] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.375 [2024-11-05 04:40:17.812653] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.375 [2024-11-05 04:40:17.813222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.375 [2024-11-05 04:40:17.813260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.375 [2024-11-05 04:40:17.813272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.375 [2024-11-05 04:40:17.813510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.375 [2024-11-05 04:40:17.813733] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.375 [2024-11-05 04:40:17.813743] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.375 [2024-11-05 04:40:17.813761] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.375 [2024-11-05 04:40:17.817308] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.375 [2024-11-05 04:40:17.826490] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.375 [2024-11-05 04:40:17.827070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.375 [2024-11-05 04:40:17.827089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.375 [2024-11-05 04:40:17.827098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.375 [2024-11-05 04:40:17.827317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.375 [2024-11-05 04:40:17.827537] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.375 [2024-11-05 04:40:17.827546] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.375 [2024-11-05 04:40:17.827553] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.375 [2024-11-05 04:40:17.831096] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.375 [2024-11-05 04:40:17.840287] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.375 [2024-11-05 04:40:17.840968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.375 [2024-11-05 04:40:17.841007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.375 [2024-11-05 04:40:17.841018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.375 [2024-11-05 04:40:17.841256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.375 [2024-11-05 04:40:17.841479] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.375 [2024-11-05 04:40:17.841489] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.375 [2024-11-05 04:40:17.841505] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.375 [2024-11-05 04:40:17.845058] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.375 [2024-11-05 04:40:17.854254] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.375 [2024-11-05 04:40:17.854889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.375 [2024-11-05 04:40:17.854928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.375 [2024-11-05 04:40:17.854940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.375 [2024-11-05 04:40:17.855181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.375 [2024-11-05 04:40:17.855404] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.375 [2024-11-05 04:40:17.855415] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.375 [2024-11-05 04:40:17.855423] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.375 [2024-11-05 04:40:17.858978] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.375 [2024-11-05 04:40:17.868161] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.375 [2024-11-05 04:40:17.868832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.375 [2024-11-05 04:40:17.868871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.375 [2024-11-05 04:40:17.868883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.375 [2024-11-05 04:40:17.869123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.375 [2024-11-05 04:40:17.869346] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.375 [2024-11-05 04:40:17.869355] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.375 [2024-11-05 04:40:17.869363] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.375 [2024-11-05 04:40:17.872918] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.375 [2024-11-05 04:40:17.882103] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.375 [2024-11-05 04:40:17.882768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.375 [2024-11-05 04:40:17.882807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.375 [2024-11-05 04:40:17.882818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.375 [2024-11-05 04:40:17.883056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.375 [2024-11-05 04:40:17.883280] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.375 [2024-11-05 04:40:17.883290] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.375 [2024-11-05 04:40:17.883298] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.375 [2024-11-05 04:40:17.886849] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.375 [2024-11-05 04:40:17.896039] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.375 [2024-11-05 04:40:17.896599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.375 [2024-11-05 04:40:17.896636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.375 [2024-11-05 04:40:17.896647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.375 [2024-11-05 04:40:17.896896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.375 [2024-11-05 04:40:17.897121] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.375 [2024-11-05 04:40:17.897131] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.375 [2024-11-05 04:40:17.897139] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.375 [2024-11-05 04:40:17.900682] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.375 [2024-11-05 04:40:17.909867] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.375 [2024-11-05 04:40:17.910528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.375 [2024-11-05 04:40:17.910566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.375 [2024-11-05 04:40:17.910577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.375 [2024-11-05 04:40:17.910825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.375 [2024-11-05 04:40:17.911049] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.375 [2024-11-05 04:40:17.911059] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.375 [2024-11-05 04:40:17.911066] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.375 [2024-11-05 04:40:17.914609] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.375 [2024-11-05 04:40:17.923795] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.375 [2024-11-05 04:40:17.924465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.375 [2024-11-05 04:40:17.924504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.375 [2024-11-05 04:40:17.924514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.375 [2024-11-05 04:40:17.924763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.375 [2024-11-05 04:40:17.924988] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.375 [2024-11-05 04:40:17.924998] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.375 [2024-11-05 04:40:17.925005] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.375 [2024-11-05 04:40:17.928549] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.375 [2024-11-05 04:40:17.937742] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.375 [2024-11-05 04:40:17.938380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.375 [2024-11-05 04:40:17.938424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.375 [2024-11-05 04:40:17.938435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.375 [2024-11-05 04:40:17.938673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.375 [2024-11-05 04:40:17.938907] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.375 [2024-11-05 04:40:17.938918] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.375 [2024-11-05 04:40:17.938926] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.376 [2024-11-05 04:40:17.942474] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.376 [2024-11-05 04:40:17.951674] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.376 [2024-11-05 04:40:17.952352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.376 [2024-11-05 04:40:17.952391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.376 [2024-11-05 04:40:17.952402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.376 [2024-11-05 04:40:17.952639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.376 [2024-11-05 04:40:17.952873] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.376 [2024-11-05 04:40:17.952883] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.376 [2024-11-05 04:40:17.952892] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.376 [2024-11-05 04:40:17.956453] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.376 [2024-11-05 04:40:17.965648] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.376 [2024-11-05 04:40:17.966228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.376 [2024-11-05 04:40:17.966248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.376 [2024-11-05 04:40:17.966257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.376 [2024-11-05 04:40:17.966477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.376 [2024-11-05 04:40:17.966696] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.376 [2024-11-05 04:40:17.966706] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.376 [2024-11-05 04:40:17.966713] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.376 [2024-11-05 04:40:17.970263] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.376 [2024-11-05 04:40:17.979449] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.376 [2024-11-05 04:40:17.979972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.376 [2024-11-05 04:40:17.979990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.376 [2024-11-05 04:40:17.979998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.376 [2024-11-05 04:40:17.980217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.376 [2024-11-05 04:40:17.980442] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.376 [2024-11-05 04:40:17.980452] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.376 [2024-11-05 04:40:17.980459] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.376 [2024-11-05 04:40:17.984005] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.376 [2024-11-05 04:40:17.993396] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.376 [2024-11-05 04:40:17.994045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.376 [2024-11-05 04:40:17.994083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.376 [2024-11-05 04:40:17.994094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.376 [2024-11-05 04:40:17.994333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.376 [2024-11-05 04:40:17.994557] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.376 [2024-11-05 04:40:17.994566] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.376 [2024-11-05 04:40:17.994574] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.376 [2024-11-05 04:40:17.998130] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.376 [2024-11-05 04:40:18.007328] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.376 [2024-11-05 04:40:18.007967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.376 [2024-11-05 04:40:18.008007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.376 [2024-11-05 04:40:18.008019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.376 [2024-11-05 04:40:18.008258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.376 [2024-11-05 04:40:18.008482] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.376 [2024-11-05 04:40:18.008491] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.376 [2024-11-05 04:40:18.008499] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.638 [2024-11-05 04:40:18.012056] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.638 [2024-11-05 04:40:18.021253] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.638 [2024-11-05 04:40:18.021956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.638 [2024-11-05 04:40:18.021996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.638 [2024-11-05 04:40:18.022007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.638 [2024-11-05 04:40:18.022244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.638 [2024-11-05 04:40:18.022468] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.638 [2024-11-05 04:40:18.022478] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.638 [2024-11-05 04:40:18.022491] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.638 [2024-11-05 04:40:18.026041] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.638 [2024-11-05 04:40:18.035253] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.638 [2024-11-05 04:40:18.035857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.638 [2024-11-05 04:40:18.035896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.638 [2024-11-05 04:40:18.035909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.638 [2024-11-05 04:40:18.036150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.638 [2024-11-05 04:40:18.036373] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.638 [2024-11-05 04:40:18.036382] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.638 [2024-11-05 04:40:18.036390] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.639 [2024-11-05 04:40:18.039949] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.639 [2024-11-05 04:40:18.049139] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.639 [2024-11-05 04:40:18.049728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.639 [2024-11-05 04:40:18.049754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.639 [2024-11-05 04:40:18.049763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.639 [2024-11-05 04:40:18.049983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.639 [2024-11-05 04:40:18.050202] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.639 [2024-11-05 04:40:18.050211] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.639 [2024-11-05 04:40:18.050219] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.639 [2024-11-05 04:40:18.053771] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.639 [2024-11-05 04:40:18.062953] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.639 [2024-11-05 04:40:18.063598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.639 [2024-11-05 04:40:18.063637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.639 [2024-11-05 04:40:18.063647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.639 [2024-11-05 04:40:18.063896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.639 [2024-11-05 04:40:18.064121] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.639 [2024-11-05 04:40:18.064130] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.639 [2024-11-05 04:40:18.064138] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.639 [2024-11-05 04:40:18.067685] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.639 [2024-11-05 04:40:18.076741] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.639 [2024-11-05 04:40:18.077434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.639 [2024-11-05 04:40:18.077473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.639 [2024-11-05 04:40:18.077484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.639 [2024-11-05 04:40:18.077722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.639 [2024-11-05 04:40:18.077957] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.639 [2024-11-05 04:40:18.077968] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.639 [2024-11-05 04:40:18.077976] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.639 [2024-11-05 04:40:18.081522] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.639 [2024-11-05 04:40:18.090708] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.639 [2024-11-05 04:40:18.091251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.639 [2024-11-05 04:40:18.091290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.639 [2024-11-05 04:40:18.091301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.639 [2024-11-05 04:40:18.091539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.639 [2024-11-05 04:40:18.091773] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.639 [2024-11-05 04:40:18.091783] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.639 [2024-11-05 04:40:18.091791] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.639 [2024-11-05 04:40:18.095335] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.639 [2024-11-05 04:40:18.104596] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.639 [2024-11-05 04:40:18.105237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.639 [2024-11-05 04:40:18.105276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.639 [2024-11-05 04:40:18.105287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.639 [2024-11-05 04:40:18.105525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.639 [2024-11-05 04:40:18.105759] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.639 [2024-11-05 04:40:18.105770] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.639 [2024-11-05 04:40:18.105777] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.639 [2024-11-05 04:40:18.109322] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.639 [2024-11-05 04:40:18.118501] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.639 [2024-11-05 04:40:18.119173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.639 [2024-11-05 04:40:18.119217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.639 [2024-11-05 04:40:18.119228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.639 [2024-11-05 04:40:18.119466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.639 [2024-11-05 04:40:18.119689] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.639 [2024-11-05 04:40:18.119699] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.639 [2024-11-05 04:40:18.119707] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.639 [2024-11-05 04:40:18.123261] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.639 [2024-11-05 04:40:18.132451] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.639 [2024-11-05 04:40:18.133122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.639 [2024-11-05 04:40:18.133161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.639 [2024-11-05 04:40:18.133172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.639 [2024-11-05 04:40:18.133410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.639 [2024-11-05 04:40:18.133634] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.639 [2024-11-05 04:40:18.133644] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.639 [2024-11-05 04:40:18.133651] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.639 [2024-11-05 04:40:18.137216] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.639 [2024-11-05 04:40:18.146401] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.639 [2024-11-05 04:40:18.147104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.639 [2024-11-05 04:40:18.147143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.639 [2024-11-05 04:40:18.147154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.639 [2024-11-05 04:40:18.147392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.639 [2024-11-05 04:40:18.147615] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.639 [2024-11-05 04:40:18.147625] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.639 [2024-11-05 04:40:18.147633] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.639 [2024-11-05 04:40:18.151188] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.639 [2024-11-05 04:40:18.160384] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.639 [2024-11-05 04:40:18.160864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.639 [2024-11-05 04:40:18.160903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.639 [2024-11-05 04:40:18.160916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.639 [2024-11-05 04:40:18.161160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.639 [2024-11-05 04:40:18.161383] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.639 [2024-11-05 04:40:18.161394] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.639 [2024-11-05 04:40:18.161402] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.639 [2024-11-05 04:40:18.164954] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.639 [2024-11-05 04:40:18.174346] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.639 [2024-11-05 04:40:18.174983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.639 [2024-11-05 04:40:18.175023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.639 [2024-11-05 04:40:18.175034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.639 [2024-11-05 04:40:18.175272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.639 [2024-11-05 04:40:18.175495] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.639 [2024-11-05 04:40:18.175505] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.640 [2024-11-05 04:40:18.175513] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.640 [2024-11-05 04:40:18.179068] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.640 [2024-11-05 04:40:18.188251] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.640 [2024-11-05 04:40:18.188853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.640 [2024-11-05 04:40:18.188892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.640 [2024-11-05 04:40:18.188904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.640 [2024-11-05 04:40:18.189146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.640 [2024-11-05 04:40:18.189369] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.640 [2024-11-05 04:40:18.189379] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.640 [2024-11-05 04:40:18.189387] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.640 [2024-11-05 04:40:18.192940] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.640 [2024-11-05 04:40:18.202132] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.640 [2024-11-05 04:40:18.202670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.640 [2024-11-05 04:40:18.202691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.640 [2024-11-05 04:40:18.202700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.640 [2024-11-05 04:40:18.202926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.640 [2024-11-05 04:40:18.203147] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.640 [2024-11-05 04:40:18.203156] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.640 [2024-11-05 04:40:18.203167] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.640 [2024-11-05 04:40:18.206709] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.640 [2024-11-05 04:40:18.216102] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.640 [2024-11-05 04:40:18.216663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.640 [2024-11-05 04:40:18.216679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.640 [2024-11-05 04:40:18.216687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.640 [2024-11-05 04:40:18.216913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.640 [2024-11-05 04:40:18.217133] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.640 [2024-11-05 04:40:18.217142] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.640 [2024-11-05 04:40:18.217150] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.640 [2024-11-05 04:40:18.220693] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.640 [2024-11-05 04:40:18.229905] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.640 [2024-11-05 04:40:18.230586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.640 [2024-11-05 04:40:18.230625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.640 [2024-11-05 04:40:18.230637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.640 [2024-11-05 04:40:18.230883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.640 [2024-11-05 04:40:18.231107] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.640 [2024-11-05 04:40:18.231117] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.640 [2024-11-05 04:40:18.231124] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.640 [2024-11-05 04:40:18.234674] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.640 [2024-11-05 04:40:18.243873] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.640 [2024-11-05 04:40:18.244538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.640 [2024-11-05 04:40:18.244578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.640 [2024-11-05 04:40:18.244588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.640 [2024-11-05 04:40:18.244836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.640 [2024-11-05 04:40:18.245061] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.640 [2024-11-05 04:40:18.245070] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.640 [2024-11-05 04:40:18.245078] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.640 [2024-11-05 04:40:18.248625] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.640 [2024-11-05 04:40:18.257834] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.640 [2024-11-05 04:40:18.258403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.640 [2024-11-05 04:40:18.258423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.640 [2024-11-05 04:40:18.258432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.640 [2024-11-05 04:40:18.258651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.640 [2024-11-05 04:40:18.258879] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.640 [2024-11-05 04:40:18.258889] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.640 [2024-11-05 04:40:18.258896] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.640 [2024-11-05 04:40:18.262437] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.640 [2024-11-05 04:40:18.271624] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.640 [2024-11-05 04:40:18.272182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.640 [2024-11-05 04:40:18.272199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.640 [2024-11-05 04:40:18.272207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.640 [2024-11-05 04:40:18.272427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.640 [2024-11-05 04:40:18.272646] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.640 [2024-11-05 04:40:18.272666] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.640 [2024-11-05 04:40:18.272673] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.903 [2024-11-05 04:40:18.276225] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.903 [2024-11-05 04:40:18.285419] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.903 [2024-11-05 04:40:18.285963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.903 [2024-11-05 04:40:18.285981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.903 [2024-11-05 04:40:18.285988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.903 [2024-11-05 04:40:18.286208] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.903 [2024-11-05 04:40:18.286427] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.903 [2024-11-05 04:40:18.286438] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.903 [2024-11-05 04:40:18.286445] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.903 [2024-11-05 04:40:18.289989] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.903 [2024-11-05 04:40:18.299219] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.903 [2024-11-05 04:40:18.299738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.903 [2024-11-05 04:40:18.299765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.903 [2024-11-05 04:40:18.299774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.903 [2024-11-05 04:40:18.299993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.903 [2024-11-05 04:40:18.300213] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.903 [2024-11-05 04:40:18.300221] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.903 [2024-11-05 04:40:18.300230] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.903 [2024-11-05 04:40:18.303774] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.903 [2024-11-05 04:40:18.313179] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.903 [2024-11-05 04:40:18.313741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.903 [2024-11-05 04:40:18.313764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.903 [2024-11-05 04:40:18.313771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.903 [2024-11-05 04:40:18.313990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.903 [2024-11-05 04:40:18.314209] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.903 [2024-11-05 04:40:18.314221] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.903 [2024-11-05 04:40:18.314228] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.903 [2024-11-05 04:40:18.317770] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.903 [2024-11-05 04:40:18.326987] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.903 [2024-11-05 04:40:18.327397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.903 [2024-11-05 04:40:18.327416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.903 [2024-11-05 04:40:18.327424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.903 [2024-11-05 04:40:18.327656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.903 [2024-11-05 04:40:18.327882] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.903 [2024-11-05 04:40:18.327893] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.903 [2024-11-05 04:40:18.327900] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.903 [2024-11-05 04:40:18.331442] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.903 [2024-11-05 04:40:18.340871] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.903 [2024-11-05 04:40:18.341432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.903 [2024-11-05 04:40:18.341449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.903 [2024-11-05 04:40:18.341458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.903 [2024-11-05 04:40:18.341680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.903 [2024-11-05 04:40:18.341909] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.903 [2024-11-05 04:40:18.341919] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.903 [2024-11-05 04:40:18.341927] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.903 [2024-11-05 04:40:18.345470] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.903 [2024-11-05 04:40:18.354676] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.903 [2024-11-05 04:40:18.355285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.903 [2024-11-05 04:40:18.355302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.903 [2024-11-05 04:40:18.355310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.903 [2024-11-05 04:40:18.355529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.903 [2024-11-05 04:40:18.355754] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.903 [2024-11-05 04:40:18.355764] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.903 [2024-11-05 04:40:18.355771] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.903 [2024-11-05 04:40:18.359315] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.903 [2024-11-05 04:40:18.368509] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.903 [2024-11-05 04:40:18.368964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.903 [2024-11-05 04:40:18.368981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.903 [2024-11-05 04:40:18.368988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.903 [2024-11-05 04:40:18.369207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.903 [2024-11-05 04:40:18.369426] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.903 [2024-11-05 04:40:18.369437] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.903 [2024-11-05 04:40:18.369444] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.903 [2024-11-05 04:40:18.372993] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.903 [2024-11-05 04:40:18.382393] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.903 [2024-11-05 04:40:18.383115] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.903 [2024-11-05 04:40:18.383154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.903 [2024-11-05 04:40:18.383165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.903 [2024-11-05 04:40:18.383404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.903 [2024-11-05 04:40:18.383627] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.903 [2024-11-05 04:40:18.383637] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.903 [2024-11-05 04:40:18.383649] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.903 [2024-11-05 04:40:18.387201] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.903 [2024-11-05 04:40:18.396178] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.903 [2024-11-05 04:40:18.396780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.903 [2024-11-05 04:40:18.396820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.903 [2024-11-05 04:40:18.396833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.903 [2024-11-05 04:40:18.397074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.903 [2024-11-05 04:40:18.397298] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.903 [2024-11-05 04:40:18.397307] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.903 [2024-11-05 04:40:18.397315] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.903 [2024-11-05 04:40:18.400867] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.903 [2024-11-05 04:40:18.410052] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.903 [2024-11-05 04:40:18.410666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.903 [2024-11-05 04:40:18.410686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.903 [2024-11-05 04:40:18.410694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.904 [2024-11-05 04:40:18.410922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.904 [2024-11-05 04:40:18.411143] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.904 [2024-11-05 04:40:18.411153] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.904 [2024-11-05 04:40:18.411160] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.904 [2024-11-05 04:40:18.414702] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.904 [2024-11-05 04:40:18.423902] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.904 [2024-11-05 04:40:18.424540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.904 [2024-11-05 04:40:18.424579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.904 [2024-11-05 04:40:18.424589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.904 [2024-11-05 04:40:18.424838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.904 [2024-11-05 04:40:18.425062] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.904 [2024-11-05 04:40:18.425072] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.904 [2024-11-05 04:40:18.425080] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.904 [2024-11-05 04:40:18.428629] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.904 [2024-11-05 04:40:18.437846] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.904 [2024-11-05 04:40:18.438392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.904 [2024-11-05 04:40:18.438412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.904 [2024-11-05 04:40:18.438420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.904 [2024-11-05 04:40:18.438640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.904 [2024-11-05 04:40:18.438868] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.904 [2024-11-05 04:40:18.438879] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.904 [2024-11-05 04:40:18.438887] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.904 [2024-11-05 04:40:18.442430] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.904 [2024-11-05 04:40:18.451828] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.904 [2024-11-05 04:40:18.452359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.904 [2024-11-05 04:40:18.452376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.904 [2024-11-05 04:40:18.452384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.904 [2024-11-05 04:40:18.452603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.904 [2024-11-05 04:40:18.452832] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.904 [2024-11-05 04:40:18.452843] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.904 [2024-11-05 04:40:18.452850] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.904 [2024-11-05 04:40:18.456406] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.904 [2024-11-05 04:40:18.465809] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.904 [2024-11-05 04:40:18.466352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.904 [2024-11-05 04:40:18.466368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.904 [2024-11-05 04:40:18.466376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.904 [2024-11-05 04:40:18.466594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.904 [2024-11-05 04:40:18.466820] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.904 [2024-11-05 04:40:18.466830] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.904 [2024-11-05 04:40:18.466838] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.904 [2024-11-05 04:40:18.470380] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.904 5620.60 IOPS, 21.96 MiB/s [2024-11-05T03:40:18.544Z] [2024-11-05 04:40:18.481441] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.904 [2024-11-05 04:40:18.481986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.904 [2024-11-05 04:40:18.482008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.904 [2024-11-05 04:40:18.482016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.904 [2024-11-05 04:40:18.482235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.904 [2024-11-05 04:40:18.482455] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.904 [2024-11-05 04:40:18.482464] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.904 [2024-11-05 04:40:18.482471] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.904 [2024-11-05 04:40:18.486063] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.904 [2024-11-05 04:40:18.495253] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.904 [2024-11-05 04:40:18.495948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.904 [2024-11-05 04:40:18.495987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.904 [2024-11-05 04:40:18.495998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.904 [2024-11-05 04:40:18.496236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.904 [2024-11-05 04:40:18.496459] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.904 [2024-11-05 04:40:18.496468] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.904 [2024-11-05 04:40:18.496476] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.904 [2024-11-05 04:40:18.500029] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.904 [2024-11-05 04:40:18.509220] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.904 [2024-11-05 04:40:18.509898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.904 [2024-11-05 04:40:18.509937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.904 [2024-11-05 04:40:18.509948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.904 [2024-11-05 04:40:18.510187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.904 [2024-11-05 04:40:18.510410] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.904 [2024-11-05 04:40:18.510420] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.904 [2024-11-05 04:40:18.510427] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.904 [2024-11-05 04:40:18.513981] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:04.904 [2024-11-05 04:40:18.523170] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.904 [2024-11-05 04:40:18.523853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.904 [2024-11-05 04:40:18.523895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.904 [2024-11-05 04:40:18.523907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.904 [2024-11-05 04:40:18.524158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.904 [2024-11-05 04:40:18.524381] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.904 [2024-11-05 04:40:18.524390] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.904 [2024-11-05 04:40:18.524398] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:04.904 [2024-11-05 04:40:18.527952] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:04.904 [2024-11-05 04:40:18.537154] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:04.904 [2024-11-05 04:40:18.537866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:04.904 [2024-11-05 04:40:18.537904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:04.904 [2024-11-05 04:40:18.537915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:04.904 [2024-11-05 04:40:18.538153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:04.904 [2024-11-05 04:40:18.538376] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:04.904 [2024-11-05 04:40:18.538386] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:04.904 [2024-11-05 04:40:18.538394] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.167 [2024-11-05 04:40:18.541950] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.167 [2024-11-05 04:40:18.551140] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.167 [2024-11-05 04:40:18.551822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.167 [2024-11-05 04:40:18.551860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.167 [2024-11-05 04:40:18.551873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.167 [2024-11-05 04:40:18.552112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.167 [2024-11-05 04:40:18.552334] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.167 [2024-11-05 04:40:18.552343] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.167 [2024-11-05 04:40:18.552351] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.167 [2024-11-05 04:40:18.555918] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.167 [2024-11-05 04:40:18.565114] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.167 [2024-11-05 04:40:18.565666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.167 [2024-11-05 04:40:18.565702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.167 [2024-11-05 04:40:18.565715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.167 [2024-11-05 04:40:18.565962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.167 [2024-11-05 04:40:18.566185] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.167 [2024-11-05 04:40:18.566199] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.167 [2024-11-05 04:40:18.566207] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.167 [2024-11-05 04:40:18.569757] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.167 [2024-11-05 04:40:18.578947] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.167 [2024-11-05 04:40:18.579617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.167 [2024-11-05 04:40:18.579656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.167 [2024-11-05 04:40:18.579666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.167 [2024-11-05 04:40:18.579913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.167 [2024-11-05 04:40:18.580136] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.167 [2024-11-05 04:40:18.580145] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.167 [2024-11-05 04:40:18.580153] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.167 [2024-11-05 04:40:18.583700] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.167 [2024-11-05 04:40:18.592891] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.167 [2024-11-05 04:40:18.593438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.167 [2024-11-05 04:40:18.593457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.167 [2024-11-05 04:40:18.593465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.167 [2024-11-05 04:40:18.593684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.167 [2024-11-05 04:40:18.593909] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.167 [2024-11-05 04:40:18.593919] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.167 [2024-11-05 04:40:18.593926] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.167 [2024-11-05 04:40:18.597476] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.167 [2024-11-05 04:40:18.606874] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.167 [2024-11-05 04:40:18.607496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.167 [2024-11-05 04:40:18.607534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.167 [2024-11-05 04:40:18.607545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.167 [2024-11-05 04:40:18.607792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.167 [2024-11-05 04:40:18.608016] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.167 [2024-11-05 04:40:18.608024] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.167 [2024-11-05 04:40:18.608032] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.167 [2024-11-05 04:40:18.611582] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.167 [2024-11-05 04:40:18.620774] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.167 [2024-11-05 04:40:18.621220] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.167 [2024-11-05 04:40:18.621238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.167 [2024-11-05 04:40:18.621246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.167 [2024-11-05 04:40:18.621466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.167 [2024-11-05 04:40:18.621685] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.167 [2024-11-05 04:40:18.621694] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.167 [2024-11-05 04:40:18.621702] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.167 [2024-11-05 04:40:18.625249] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.167 [2024-11-05 04:40:18.634650] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.167 [2024-11-05 04:40:18.635188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.167 [2024-11-05 04:40:18.635204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.167 [2024-11-05 04:40:18.635212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.168 [2024-11-05 04:40:18.635431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.168 [2024-11-05 04:40:18.635649] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.168 [2024-11-05 04:40:18.635658] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.168 [2024-11-05 04:40:18.635665] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.168 [2024-11-05 04:40:18.639217] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.168 [2024-11-05 04:40:18.648607] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.168 [2024-11-05 04:40:18.649126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.168 [2024-11-05 04:40:18.649164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.168 [2024-11-05 04:40:18.649174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.168 [2024-11-05 04:40:18.649412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.168 [2024-11-05 04:40:18.649635] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.168 [2024-11-05 04:40:18.649644] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.168 [2024-11-05 04:40:18.649651] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.168 [2024-11-05 04:40:18.653205] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.168 [2024-11-05 04:40:18.662402] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.168 [2024-11-05 04:40:18.663042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.168 [2024-11-05 04:40:18.663086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.168 [2024-11-05 04:40:18.663097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.168 [2024-11-05 04:40:18.663335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.168 [2024-11-05 04:40:18.663558] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.168 [2024-11-05 04:40:18.663566] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.168 [2024-11-05 04:40:18.663574] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.168 [2024-11-05 04:40:18.667129] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.168 [2024-11-05 04:40:18.676330] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.168 [2024-11-05 04:40:18.677034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.168 [2024-11-05 04:40:18.677072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.168 [2024-11-05 04:40:18.677083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.168 [2024-11-05 04:40:18.677320] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.168 [2024-11-05 04:40:18.677542] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.168 [2024-11-05 04:40:18.677551] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.168 [2024-11-05 04:40:18.677559] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.168 [2024-11-05 04:40:18.681114] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.168 [2024-11-05 04:40:18.690306] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.168 [2024-11-05 04:40:18.690902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.168 [2024-11-05 04:40:18.690922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.168 [2024-11-05 04:40:18.690930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.168 [2024-11-05 04:40:18.691149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.168 [2024-11-05 04:40:18.691367] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.168 [2024-11-05 04:40:18.691376] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.168 [2024-11-05 04:40:18.691383] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.168 [2024-11-05 04:40:18.694928] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.168 [2024-11-05 04:40:18.704111] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.168 [2024-11-05 04:40:18.704679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.168 [2024-11-05 04:40:18.704696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.168 [2024-11-05 04:40:18.704704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.168 [2024-11-05 04:40:18.704932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.168 [2024-11-05 04:40:18.705151] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.168 [2024-11-05 04:40:18.705160] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.168 [2024-11-05 04:40:18.705167] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.168 [2024-11-05 04:40:18.708703] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.168 [2024-11-05 04:40:18.718096] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.168 [2024-11-05 04:40:18.718627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.168 [2024-11-05 04:40:18.718642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.168 [2024-11-05 04:40:18.718650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.168 [2024-11-05 04:40:18.718873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.168 [2024-11-05 04:40:18.719092] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.168 [2024-11-05 04:40:18.719101] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.168 [2024-11-05 04:40:18.719108] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.168 [2024-11-05 04:40:18.722646] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.168 [2024-11-05 04:40:18.732036] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.168 [2024-11-05 04:40:18.732605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.168 [2024-11-05 04:40:18.732621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.168 [2024-11-05 04:40:18.732628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.168 [2024-11-05 04:40:18.732851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.168 [2024-11-05 04:40:18.733071] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.168 [2024-11-05 04:40:18.733079] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.168 [2024-11-05 04:40:18.733086] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.168 [2024-11-05 04:40:18.736622] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.168 [2024-11-05 04:40:18.745818] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.168 [2024-11-05 04:40:18.746434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.168 [2024-11-05 04:40:18.746472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.168 [2024-11-05 04:40:18.746483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.168 [2024-11-05 04:40:18.746721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.168 [2024-11-05 04:40:18.746952] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.168 [2024-11-05 04:40:18.746966] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.168 [2024-11-05 04:40:18.746974] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.168 [2024-11-05 04:40:18.750520] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.168 [2024-11-05 04:40:18.759726] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.168 [2024-11-05 04:40:18.760338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.168 [2024-11-05 04:40:18.760376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.168 [2024-11-05 04:40:18.760387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.168 [2024-11-05 04:40:18.760624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.168 [2024-11-05 04:40:18.760854] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.168 [2024-11-05 04:40:18.760864] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.168 [2024-11-05 04:40:18.760871] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.168 [2024-11-05 04:40:18.764419] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.169 [2024-11-05 04:40:18.773607] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.169 [2024-11-05 04:40:18.774157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.169 [2024-11-05 04:40:18.774177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.169 [2024-11-05 04:40:18.774185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.169 [2024-11-05 04:40:18.774404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.169 [2024-11-05 04:40:18.774623] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.169 [2024-11-05 04:40:18.774632] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.169 [2024-11-05 04:40:18.774639] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.169 [2024-11-05 04:40:18.778185] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.169 [2024-11-05 04:40:18.787578] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.169 [2024-11-05 04:40:18.788206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.169 [2024-11-05 04:40:18.788244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.169 [2024-11-05 04:40:18.788255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.169 [2024-11-05 04:40:18.788493] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.169 [2024-11-05 04:40:18.788715] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.169 [2024-11-05 04:40:18.788724] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.169 [2024-11-05 04:40:18.788731] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.169 [2024-11-05 04:40:18.792289] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.169 [2024-11-05 04:40:18.801480] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.169 [2024-11-05 04:40:18.802134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.169 [2024-11-05 04:40:18.802172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.169 [2024-11-05 04:40:18.802183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.169 [2024-11-05 04:40:18.802421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.169 [2024-11-05 04:40:18.802643] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.169 [2024-11-05 04:40:18.802652] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.169 [2024-11-05 04:40:18.802660] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.431 [2024-11-05 04:40:18.806212] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.431 [2024-11-05 04:40:18.815401] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.431 [2024-11-05 04:40:18.816096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.431 [2024-11-05 04:40:18.816134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.431 [2024-11-05 04:40:18.816144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.431 [2024-11-05 04:40:18.816382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.431 [2024-11-05 04:40:18.816604] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.431 [2024-11-05 04:40:18.816613] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.431 [2024-11-05 04:40:18.816621] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.431 [2024-11-05 04:40:18.820175] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.431 [2024-11-05 04:40:18.829367] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.431 [2024-11-05 04:40:18.829934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.431 [2024-11-05 04:40:18.829973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.431 [2024-11-05 04:40:18.829984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.431 [2024-11-05 04:40:18.830222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.431 [2024-11-05 04:40:18.830445] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.431 [2024-11-05 04:40:18.830454] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.431 [2024-11-05 04:40:18.830462] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.431 [2024-11-05 04:40:18.834014] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.431 [2024-11-05 04:40:18.843220] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.431 [2024-11-05 04:40:18.843971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.432 [2024-11-05 04:40:18.844014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.432 [2024-11-05 04:40:18.844025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.432 [2024-11-05 04:40:18.844263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.432 [2024-11-05 04:40:18.844485] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.432 [2024-11-05 04:40:18.844494] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.432 [2024-11-05 04:40:18.844501] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.432 [2024-11-05 04:40:18.848054] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.432 [2024-11-05 04:40:18.857047] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.432 [2024-11-05 04:40:18.857713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.432 [2024-11-05 04:40:18.857758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.432 [2024-11-05 04:40:18.857770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.432 [2024-11-05 04:40:18.858007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.432 [2024-11-05 04:40:18.858230] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.432 [2024-11-05 04:40:18.858239] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.432 [2024-11-05 04:40:18.858246] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.432 [2024-11-05 04:40:18.861793] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.432 [2024-11-05 04:40:18.870980] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.432 [2024-11-05 04:40:18.871428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.432 [2024-11-05 04:40:18.871448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.432 [2024-11-05 04:40:18.871456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.432 [2024-11-05 04:40:18.871676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.432 [2024-11-05 04:40:18.871901] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.432 [2024-11-05 04:40:18.871911] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.432 [2024-11-05 04:40:18.871917] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.432 [2024-11-05 04:40:18.875456] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.432 [2024-11-05 04:40:18.884858] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.432 [2024-11-05 04:40:18.885402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.432 [2024-11-05 04:40:18.885417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.432 [2024-11-05 04:40:18.885425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.432 [2024-11-05 04:40:18.885648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.432 [2024-11-05 04:40:18.885872] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.432 [2024-11-05 04:40:18.885882] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.432 [2024-11-05 04:40:18.885889] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.432 [2024-11-05 04:40:18.889429] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.432 [2024-11-05 04:40:18.898823] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.432 [2024-11-05 04:40:18.899478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.432 [2024-11-05 04:40:18.899516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.432 [2024-11-05 04:40:18.899527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.432 [2024-11-05 04:40:18.899774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.432 [2024-11-05 04:40:18.899998] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.432 [2024-11-05 04:40:18.900006] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.432 [2024-11-05 04:40:18.900014] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.432 [2024-11-05 04:40:18.903559] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.432 [2024-11-05 04:40:18.912756] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.432 [2024-11-05 04:40:18.913394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.432 [2024-11-05 04:40:18.913432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.432 [2024-11-05 04:40:18.913443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.432 [2024-11-05 04:40:18.913681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.432 [2024-11-05 04:40:18.913912] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.432 [2024-11-05 04:40:18.913922] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.432 [2024-11-05 04:40:18.913929] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.432 [2024-11-05 04:40:18.917475] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.432 [2024-11-05 04:40:18.926666] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.432 [2024-11-05 04:40:18.927228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.432 [2024-11-05 04:40:18.927265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.432 [2024-11-05 04:40:18.927276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.432 [2024-11-05 04:40:18.927514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.432 [2024-11-05 04:40:18.927737] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.432 [2024-11-05 04:40:18.927757] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.432 [2024-11-05 04:40:18.927766] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.432 [2024-11-05 04:40:18.931311] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.432 [2024-11-05 04:40:18.940510] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.432 [2024-11-05 04:40:18.940958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.432 [2024-11-05 04:40:18.940978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.432 [2024-11-05 04:40:18.940986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.432 [2024-11-05 04:40:18.941206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.432 [2024-11-05 04:40:18.941425] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.432 [2024-11-05 04:40:18.941434] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.432 [2024-11-05 04:40:18.941440] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.432 [2024-11-05 04:40:18.944988] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.432 [2024-11-05 04:40:18.954379] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.432 [2024-11-05 04:40:18.955077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.432 [2024-11-05 04:40:18.955115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.432 [2024-11-05 04:40:18.955126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.432 [2024-11-05 04:40:18.955363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.432 [2024-11-05 04:40:18.955586] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.432 [2024-11-05 04:40:18.955594] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.432 [2024-11-05 04:40:18.955602] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.432 [2024-11-05 04:40:18.959166] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.432 [2024-11-05 04:40:18.968355] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.432 [2024-11-05 04:40:18.969016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.432 [2024-11-05 04:40:18.969055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.432 [2024-11-05 04:40:18.969066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.432 [2024-11-05 04:40:18.969304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.432 [2024-11-05 04:40:18.969526] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.432 [2024-11-05 04:40:18.969535] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.433 [2024-11-05 04:40:18.969542] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.433 [2024-11-05 04:40:18.973102] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.433 [2024-11-05 04:40:18.982289] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.433 [2024-11-05 04:40:18.982768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.433 [2024-11-05 04:40:18.982787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.433 [2024-11-05 04:40:18.982795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.433 [2024-11-05 04:40:18.983015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.433 [2024-11-05 04:40:18.983234] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.433 [2024-11-05 04:40:18.983242] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.433 [2024-11-05 04:40:18.983249] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.433 [2024-11-05 04:40:18.986791] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.433 [2024-11-05 04:40:18.996187] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.433 [2024-11-05 04:40:18.996738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.433 [2024-11-05 04:40:18.996784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:05.433 [2024-11-05 04:40:18.996796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:05.433 [2024-11-05 04:40:18.997035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:05.433 [2024-11-05 04:40:18.997258] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.433 [2024-11-05 04:40:18.997266] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.433 [2024-11-05 04:40:18.997274] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.433 [2024-11-05 04:40:19.000821] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.433 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3170528 Killed "${NVMF_APP[@]}" "$@"
00:29:05.433 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:05.433 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:05.433 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:05.433 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:05.433 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:05.433 [2024-11-05 04:40:19.010005] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.433 [2024-11-05 04:40:19.010677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.433 [2024-11-05 04:40:19.010714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:05.433 [2024-11-05 04:40:19.010726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:05.433 [2024-11-05 04:40:19.010974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:05.433 [2024-11-05 04:40:19.011197] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.433 [2024-11-05 04:40:19.011210] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.433 [2024-11-05 04:40:19.011218] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.433 [2024-11-05 04:40:19.014766] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.433 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=3172231
00:29:05.433 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 3172231
00:29:05.433 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:05.433 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 3172231 ']'
00:29:05.433 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:05.433 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:05.433 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:05.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:05.433 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:05.433 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:05.433 [2024-11-05 04:40:19.023961] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.433 [2024-11-05 04:40:19.024552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.433 [2024-11-05 04:40:19.024589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:05.433 [2024-11-05 04:40:19.024603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:05.433 [2024-11-05 04:40:19.024851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:05.433 [2024-11-05 04:40:19.025076] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.433 [2024-11-05 04:40:19.025085] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.433 [2024-11-05 04:40:19.025093] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.433 [2024-11-05 04:40:19.028640] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
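The xtrace above records the target side being bounced: bdevperf.sh killed the previous nvmf_tgt (PID 3170528), and tgt_init/nvmfappstart relaunch it inside the cvl_0_0_ns_spdk namespace while waitforlisten polls the RPC socket until the new process (PID 3172231) answers, up to max_retries times. A condensed sketch of what those helpers amount to; the real implementations live in the test tree's nvmf/common.sh and common/autotest_common.sh, so this is a paraphrase of the traced steps, not the exact code:

    # Relaunch the target in the test namespace, as in the xtrace above
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # waitforlisten: poll the UNIX-domain RPC socket until it answers
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done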
00:29:05.433 [2024-11-05 04:40:19.037843] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.433 [2024-11-05 04:40:19.038405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.433 [2024-11-05 04:40:19.038443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:05.433 [2024-11-05 04:40:19.038455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:05.433 [2024-11-05 04:40:19.038698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:05.433 [2024-11-05 04:40:19.038930] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.433 [2024-11-05 04:40:19.038939] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.433 [2024-11-05 04:40:19.038947] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.433 [2024-11-05 04:40:19.042492] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.433 [2024-11-05 04:40:19.051691] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.433 [2024-11-05 04:40:19.052359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.433 [2024-11-05 04:40:19.052398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:05.433 [2024-11-05 04:40:19.052409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:05.433 [2024-11-05 04:40:19.052647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:05.433 [2024-11-05 04:40:19.052879] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.433 [2024-11-05 04:40:19.052888] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.433 [2024-11-05 04:40:19.052896] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.433 [2024-11-05 04:40:19.056452] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.433 [2024-11-05 04:40:19.065688] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.433 [2024-11-05 04:40:19.066169] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization...
00:29:05.433 [2024-11-05 04:40:19.066214] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:05.433 [2024-11-05 04:40:19.066222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.433 [2024-11-05 04:40:19.066259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:05.433 [2024-11-05 04:40:19.066271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:05.433 [2024-11-05 04:40:19.066511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:05.433 [2024-11-05 04:40:19.066734] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.433 [2024-11-05 04:40:19.066743] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.433 [2024-11-05 04:40:19.066760] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.695 [2024-11-05 04:40:19.070303] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.695 [2024-11-05 04:40:19.079496] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.695 [2024-11-05 04:40:19.080217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.695 [2024-11-05 04:40:19.080254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:05.695 [2024-11-05 04:40:19.080265] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:05.695 [2024-11-05 04:40:19.080503] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:05.695 [2024-11-05 04:40:19.080726] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.695 [2024-11-05 04:40:19.080735] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.695 [2024-11-05 04:40:19.080743] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.696 [2024-11-05 04:40:19.084299] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
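The "Starting SPDK ... initialization" and "DPDK EAL parameters" entries interleaved above are the replacement target booting: spdk_app_start expands the nvmf_tgt flags into the EAL arguments shown, with -m 0xE passed through as the coremask -c 0xE (cores 1, 2 and 3, matching the "Total cores available: 3" and reactor notices further down) and -i 0 becoming --file-prefix=spdk0. Written out by hand, the launch behind that EAL line is just:

    # Same flags the xtrace showed; path relative to the spdk checkout
    # -m 0xE: cores 1-3, -i 0: shm id 0 (file prefix spdk0), -e 0xFFFF: all nvmf tracepoint groups
    ./build/bin/nvmf_tgt -m 0xE -i 0 -e 0xFFFF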
00:29:05.696 [2024-11-05 04:40:19.093285] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.696 [2024-11-05 04:40:19.093880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.696 [2024-11-05 04:40:19.093919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.696 [2024-11-05 04:40:19.093932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.696 [2024-11-05 04:40:19.094173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.696 [2024-11-05 04:40:19.094396] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.696 [2024-11-05 04:40:19.094405] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.696 [2024-11-05 04:40:19.094412] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.696 [2024-11-05 04:40:19.097966] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.696 [2024-11-05 04:40:19.107244] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.696 [2024-11-05 04:40:19.107976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.696 [2024-11-05 04:40:19.108014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.696 [2024-11-05 04:40:19.108025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.696 [2024-11-05 04:40:19.108263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.696 [2024-11-05 04:40:19.108486] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.696 [2024-11-05 04:40:19.108494] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.696 [2024-11-05 04:40:19.108502] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.696 [2024-11-05 04:40:19.112054] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.696 [2024-11-05 04:40:19.121244] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.696 [2024-11-05 04:40:19.121845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.696 [2024-11-05 04:40:19.121883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.696 [2024-11-05 04:40:19.121895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.696 [2024-11-05 04:40:19.122137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.696 [2024-11-05 04:40:19.122360] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.696 [2024-11-05 04:40:19.122368] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.696 [2024-11-05 04:40:19.122376] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.696 [2024-11-05 04:40:19.125930] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.696 [2024-11-05 04:40:19.135122] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.696 [2024-11-05 04:40:19.135687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.696 [2024-11-05 04:40:19.135723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.696 [2024-11-05 04:40:19.135739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.696 [2024-11-05 04:40:19.135985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.696 [2024-11-05 04:40:19.136208] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.696 [2024-11-05 04:40:19.136217] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.696 [2024-11-05 04:40:19.136225] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.696 [2024-11-05 04:40:19.139787] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.696 [2024-11-05 04:40:19.148975] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.696 [2024-11-05 04:40:19.149673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.696 [2024-11-05 04:40:19.149711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:05.696 [2024-11-05 04:40:19.149722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:05.696 [2024-11-05 04:40:19.149969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:05.696 [2024-11-05 04:40:19.150193] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.696 [2024-11-05 04:40:19.150201] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.696 [2024-11-05 04:40:19.150209] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.696 [2024-11-05 04:40:19.153758] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.696 [2024-11-05 04:40:19.158015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:05.696 [2024-11-05 04:40:19.162961] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.696 [2024-11-05 04:40:19.163652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.696 [2024-11-05 04:40:19.163690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:05.696 [2024-11-05 04:40:19.163701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:05.696 [2024-11-05 04:40:19.163950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:05.696 [2024-11-05 04:40:19.164174] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.696 [2024-11-05 04:40:19.164182] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.696 [2024-11-05 04:40:19.164190] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.696 [2024-11-05 04:40:19.167743] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.696 [2024-11-05 04:40:19.176941] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.696 [2024-11-05 04:40:19.177633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.696 [2024-11-05 04:40:19.177671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:05.696 [2024-11-05 04:40:19.177683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:05.696 [2024-11-05 04:40:19.177937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:05.696 [2024-11-05 04:40:19.178161] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.696 [2024-11-05 04:40:19.178170] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.696 [2024-11-05 04:40:19.178177] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.696 [2024-11-05 04:40:19.181722] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.696 [2024-11-05 04:40:19.187343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:05.696 [2024-11-05 04:40:19.187375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:05.696 [2024-11-05 04:40:19.187382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:05.696 [2024-11-05 04:40:19.187386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:05.696 [2024-11-05 04:40:19.187391] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
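The app_setup_trace notices above spell out how to inspect the tracepoints enabled by -e 0xFFFF. As usage, under the assumption (true for upstream SPDK) that the spdk_trace tool is built to build/bin and accepts a trace file with -f:

    # Live snapshot from the running target (shm id 0, as in the notice)
    ./build/bin/spdk_trace -s nvmf -i 0
    # Offline: keep a copy of the shared-memory trace for later analysis
    cp /dev/shm/nvmf_trace.0 /tmp/
    ./build/bin/spdk_trace -f /tmp/nvmf_trace.0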
00:29:05.696 [2024-11-05 04:40:19.188597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:05.696 [2024-11-05 04:40:19.188756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:05.696 [2024-11-05 04:40:19.188768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:05.696 [2024-11-05 04:40:19.190914] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.696 [2024-11-05 04:40:19.191542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.696 [2024-11-05 04:40:19.191581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:05.696 [2024-11-05 04:40:19.191592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:05.696 [2024-11-05 04:40:19.191837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:05.696 [2024-11-05 04:40:19.192062] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.696 [2024-11-05 04:40:19.192073] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.696 [2024-11-05 04:40:19.192080] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.696 [2024-11-05 04:40:19.195629] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.696 [2024-11-05 04:40:19.204823] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:05.696 [2024-11-05 04:40:19.205448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:05.697 [2024-11-05 04:40:19.205487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420
00:29:05.697 [2024-11-05 04:40:19.205497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set
00:29:05.697 [2024-11-05 04:40:19.205736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor
00:29:05.697 [2024-11-05 04:40:19.205967] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:05.697 [2024-11-05 04:40:19.205977] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:05.697 [2024-11-05 04:40:19.205985] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:05.697 [2024-11-05 04:40:19.209530] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:05.697 [2024-11-05 04:40:19.218726] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.697 [2024-11-05 04:40:19.219410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-11-05 04:40:19.219449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.697 [2024-11-05 04:40:19.219460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.697 [2024-11-05 04:40:19.219699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.697 [2024-11-05 04:40:19.219929] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.697 [2024-11-05 04:40:19.219939] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.697 [2024-11-05 04:40:19.219947] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.697 [2024-11-05 04:40:19.223493] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.697 [2024-11-05 04:40:19.232688] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.697 [2024-11-05 04:40:19.233184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-11-05 04:40:19.233204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.697 [2024-11-05 04:40:19.233212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.697 [2024-11-05 04:40:19.233432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.697 [2024-11-05 04:40:19.233651] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.697 [2024-11-05 04:40:19.233659] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.697 [2024-11-05 04:40:19.233667] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.697 [2024-11-05 04:40:19.237211] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.697 [2024-11-05 04:40:19.246626] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.697 [2024-11-05 04:40:19.247062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-11-05 04:40:19.247081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.697 [2024-11-05 04:40:19.247089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.697 [2024-11-05 04:40:19.247308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.697 [2024-11-05 04:40:19.247527] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.697 [2024-11-05 04:40:19.247536] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.697 [2024-11-05 04:40:19.247543] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.697 [2024-11-05 04:40:19.251084] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.697 [2024-11-05 04:40:19.260549] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.697 [2024-11-05 04:40:19.261068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-11-05 04:40:19.261092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.697 [2024-11-05 04:40:19.261100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.697 [2024-11-05 04:40:19.261319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.697 [2024-11-05 04:40:19.261537] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.697 [2024-11-05 04:40:19.261545] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.697 [2024-11-05 04:40:19.261553] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.697 [2024-11-05 04:40:19.265094] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.697 [2024-11-05 04:40:19.274484] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.697 [2024-11-05 04:40:19.274891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-11-05 04:40:19.274908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.697 [2024-11-05 04:40:19.274915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.697 [2024-11-05 04:40:19.275134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.697 [2024-11-05 04:40:19.275353] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.697 [2024-11-05 04:40:19.275369] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.697 [2024-11-05 04:40:19.275376] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.697 [2024-11-05 04:40:19.278919] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.697 [2024-11-05 04:40:19.288308] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.697 [2024-11-05 04:40:19.288847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-11-05 04:40:19.288885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.697 [2024-11-05 04:40:19.288898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.697 [2024-11-05 04:40:19.289139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.697 [2024-11-05 04:40:19.289361] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.697 [2024-11-05 04:40:19.289370] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.697 [2024-11-05 04:40:19.289379] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.697 [2024-11-05 04:40:19.292932] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.697 [2024-11-05 04:40:19.302136] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.697 [2024-11-05 04:40:19.302841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-11-05 04:40:19.302879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.697 [2024-11-05 04:40:19.302891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.697 [2024-11-05 04:40:19.303137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.697 [2024-11-05 04:40:19.303360] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.697 [2024-11-05 04:40:19.303368] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.697 [2024-11-05 04:40:19.303376] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.697 [2024-11-05 04:40:19.306928] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.697 [2024-11-05 04:40:19.316117] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.697 [2024-11-05 04:40:19.316768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-11-05 04:40:19.316806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.697 [2024-11-05 04:40:19.316818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.697 [2024-11-05 04:40:19.317060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.697 [2024-11-05 04:40:19.317282] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.697 [2024-11-05 04:40:19.317291] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.697 [2024-11-05 04:40:19.317299] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.697 [2024-11-05 04:40:19.320852] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.697 [2024-11-05 04:40:19.330040] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.697 [2024-11-05 04:40:19.330587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.697 [2024-11-05 04:40:19.330606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.697 [2024-11-05 04:40:19.330614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.697 [2024-11-05 04:40:19.330839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.697 [2024-11-05 04:40:19.331059] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.697 [2024-11-05 04:40:19.331067] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.697 [2024-11-05 04:40:19.331075] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.960 [2024-11-05 04:40:19.334611] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.960 [2024-11-05 04:40:19.344019] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.960 [2024-11-05 04:40:19.344610] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.960 [2024-11-05 04:40:19.344626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.960 [2024-11-05 04:40:19.344634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.960 [2024-11-05 04:40:19.344857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.960 [2024-11-05 04:40:19.345077] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.960 [2024-11-05 04:40:19.345085] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.960 [2024-11-05 04:40:19.345097] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.960 [2024-11-05 04:40:19.348637] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.960 [2024-11-05 04:40:19.357833] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.960 [2024-11-05 04:40:19.358386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.960 [2024-11-05 04:40:19.358402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.960 [2024-11-05 04:40:19.358409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.960 [2024-11-05 04:40:19.358628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.960 [2024-11-05 04:40:19.358852] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.960 [2024-11-05 04:40:19.358861] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.960 [2024-11-05 04:40:19.358869] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.960 [2024-11-05 04:40:19.362405] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.960 [2024-11-05 04:40:19.371790] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.960 [2024-11-05 04:40:19.372428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.960 [2024-11-05 04:40:19.372466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.960 [2024-11-05 04:40:19.372477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.960 [2024-11-05 04:40:19.372715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.960 [2024-11-05 04:40:19.372945] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.960 [2024-11-05 04:40:19.372954] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.960 [2024-11-05 04:40:19.372962] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.960 [2024-11-05 04:40:19.376507] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.960 [2024-11-05 04:40:19.385720] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.960 [2024-11-05 04:40:19.386374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.960 [2024-11-05 04:40:19.386412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.960 [2024-11-05 04:40:19.386422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.960 [2024-11-05 04:40:19.386660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.960 [2024-11-05 04:40:19.386891] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.960 [2024-11-05 04:40:19.386900] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.960 [2024-11-05 04:40:19.386907] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.960 [2024-11-05 04:40:19.390451] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.960 [2024-11-05 04:40:19.399645] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.960 [2024-11-05 04:40:19.400260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.960 [2024-11-05 04:40:19.400299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.960 [2024-11-05 04:40:19.400309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.960 [2024-11-05 04:40:19.400548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.960 [2024-11-05 04:40:19.400779] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.960 [2024-11-05 04:40:19.400789] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.960 [2024-11-05 04:40:19.400796] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.960 [2024-11-05 04:40:19.404342] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.960 [2024-11-05 04:40:19.413531] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.960 [2024-11-05 04:40:19.414191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.960 [2024-11-05 04:40:19.414228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.960 [2024-11-05 04:40:19.414240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.960 [2024-11-05 04:40:19.414478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.960 [2024-11-05 04:40:19.414700] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.960 [2024-11-05 04:40:19.414710] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.960 [2024-11-05 04:40:19.414718] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.960 [2024-11-05 04:40:19.418268] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.960 [2024-11-05 04:40:19.427453] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.960 [2024-11-05 04:40:19.428153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.960 [2024-11-05 04:40:19.428190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.960 [2024-11-05 04:40:19.428202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.960 [2024-11-05 04:40:19.428440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.960 [2024-11-05 04:40:19.428662] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.960 [2024-11-05 04:40:19.428671] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.960 [2024-11-05 04:40:19.428678] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.960 [2024-11-05 04:40:19.432231] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.960 [2024-11-05 04:40:19.441431] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.960 [2024-11-05 04:40:19.442060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.960 [2024-11-05 04:40:19.442102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.960 [2024-11-05 04:40:19.442113] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.960 [2024-11-05 04:40:19.442351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.960 [2024-11-05 04:40:19.442574] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.960 [2024-11-05 04:40:19.442582] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.960 [2024-11-05 04:40:19.442590] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.960 [2024-11-05 04:40:19.446142] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.960 [2024-11-05 04:40:19.455327] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.960 [2024-11-05 04:40:19.456059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.961 [2024-11-05 04:40:19.456097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.961 [2024-11-05 04:40:19.456107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.961 [2024-11-05 04:40:19.456345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.961 [2024-11-05 04:40:19.456568] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.961 [2024-11-05 04:40:19.456576] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.961 [2024-11-05 04:40:19.456584] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.961 [2024-11-05 04:40:19.460143] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.961 [2024-11-05 04:40:19.469128] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.961 [2024-11-05 04:40:19.469821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.961 [2024-11-05 04:40:19.469859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.961 [2024-11-05 04:40:19.469870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.961 [2024-11-05 04:40:19.470108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.961 [2024-11-05 04:40:19.470330] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.961 [2024-11-05 04:40:19.470339] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.961 [2024-11-05 04:40:19.470347] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.961 [2024-11-05 04:40:19.473902] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.961 4683.83 IOPS, 18.30 MiB/s [2024-11-05T03:40:19.601Z] [2024-11-05 04:40:19.483917] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.961 [2024-11-05 04:40:19.484565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.961 [2024-11-05 04:40:19.484602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.961 [2024-11-05 04:40:19.484613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.961 [2024-11-05 04:40:19.484863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.961 [2024-11-05 04:40:19.485087] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.961 [2024-11-05 04:40:19.485096] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.961 [2024-11-05 04:40:19.485104] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.961 [2024-11-05 04:40:19.488648] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
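The interleaved "4683.83 IOPS, 18.30 MiB/s" line is bdevperf's periodic throughput sample, printed while the reconnect loop runs. The two figures are mutually consistent if one assumes a 4 KiB I/O size (the job configuration itself is not shown in this part of the log), since 4683.83 IOPS x 4096 bytes ~ 18.30 MiB/s:

    # sanity-check the reported rate, assuming 4 KiB I/Os; prints 18.29 (bc truncates)
    echo 'scale=2; 4683.83 * 4096 / 1048576' | bc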
00:29:05.961 [2024-11-05 04:40:19.497839] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.961 [2024-11-05 04:40:19.498388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.961 [2024-11-05 04:40:19.498407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.961 [2024-11-05 04:40:19.498415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.961 [2024-11-05 04:40:19.498634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.961 [2024-11-05 04:40:19.498858] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.961 [2024-11-05 04:40:19.498875] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.961 [2024-11-05 04:40:19.498882] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.961 [2024-11-05 04:40:19.502421] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.961 [2024-11-05 04:40:19.511822] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.961 [2024-11-05 04:40:19.512457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.961 [2024-11-05 04:40:19.512495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.961 [2024-11-05 04:40:19.512506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.961 [2024-11-05 04:40:19.512745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.961 [2024-11-05 04:40:19.512976] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.961 [2024-11-05 04:40:19.512985] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.961 [2024-11-05 04:40:19.512993] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.961 [2024-11-05 04:40:19.516535] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.961 [2024-11-05 04:40:19.525728] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.961 [2024-11-05 04:40:19.526418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.961 [2024-11-05 04:40:19.526456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.961 [2024-11-05 04:40:19.526467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.961 [2024-11-05 04:40:19.526706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.961 [2024-11-05 04:40:19.526938] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.961 [2024-11-05 04:40:19.526952] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.961 [2024-11-05 04:40:19.526960] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.961 [2024-11-05 04:40:19.530505] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.961 [2024-11-05 04:40:19.539707] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.961 [2024-11-05 04:40:19.540285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.961 [2024-11-05 04:40:19.540303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.961 [2024-11-05 04:40:19.540311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.961 [2024-11-05 04:40:19.540531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.961 [2024-11-05 04:40:19.540756] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.961 [2024-11-05 04:40:19.540765] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.961 [2024-11-05 04:40:19.540772] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.961 [2024-11-05 04:40:19.544312] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.961 [2024-11-05 04:40:19.553498] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.961 [2024-11-05 04:40:19.554122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.961 [2024-11-05 04:40:19.554159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.961 [2024-11-05 04:40:19.554171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.961 [2024-11-05 04:40:19.554408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.961 [2024-11-05 04:40:19.554631] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.961 [2024-11-05 04:40:19.554639] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.961 [2024-11-05 04:40:19.554647] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.961 [2024-11-05 04:40:19.558207] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.961 [2024-11-05 04:40:19.567398] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.961 [2024-11-05 04:40:19.568084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.961 [2024-11-05 04:40:19.568122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.961 [2024-11-05 04:40:19.568133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.961 [2024-11-05 04:40:19.568371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.961 [2024-11-05 04:40:19.568594] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.961 [2024-11-05 04:40:19.568602] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.961 [2024-11-05 04:40:19.568610] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.961 [2024-11-05 04:40:19.572166] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:05.961 [2024-11-05 04:40:19.581358] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.961 [2024-11-05 04:40:19.582061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.961 [2024-11-05 04:40:19.582099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.961 [2024-11-05 04:40:19.582110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.961 [2024-11-05 04:40:19.582349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.961 [2024-11-05 04:40:19.582571] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.961 [2024-11-05 04:40:19.582580] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.961 [2024-11-05 04:40:19.582587] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:05.961 [2024-11-05 04:40:19.586141] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:05.961 [2024-11-05 04:40:19.595331] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:05.962 [2024-11-05 04:40:19.596017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:05.962 [2024-11-05 04:40:19.596055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:05.962 [2024-11-05 04:40:19.596066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:05.962 [2024-11-05 04:40:19.596304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:05.962 [2024-11-05 04:40:19.596527] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:05.962 [2024-11-05 04:40:19.596535] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:05.962 [2024-11-05 04:40:19.596543] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.223 [2024-11-05 04:40:19.600100] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.223 [2024-11-05 04:40:19.609297] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.223 [2024-11-05 04:40:19.609819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.223 [2024-11-05 04:40:19.609838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.223 [2024-11-05 04:40:19.609846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.223 [2024-11-05 04:40:19.610066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.223 [2024-11-05 04:40:19.610285] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.223 [2024-11-05 04:40:19.610293] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.223 [2024-11-05 04:40:19.610301] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.223 [2024-11-05 04:40:19.613843] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:06.223 [2024-11-05 04:40:19.623229] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.223 [2024-11-05 04:40:19.623847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.223 [2024-11-05 04:40:19.623889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.223 [2024-11-05 04:40:19.623902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.223 [2024-11-05 04:40:19.624143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.223 [2024-11-05 04:40:19.624365] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.223 [2024-11-05 04:40:19.624374] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.223 [2024-11-05 04:40:19.624382] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.223 [2024-11-05 04:40:19.627936] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.223 [2024-11-05 04:40:19.637125] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.223 [2024-11-05 04:40:19.637773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.223 [2024-11-05 04:40:19.637812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.223 [2024-11-05 04:40:19.637823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.223 [2024-11-05 04:40:19.638061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.223 [2024-11-05 04:40:19.638283] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.223 [2024-11-05 04:40:19.638291] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.223 [2024-11-05 04:40:19.638299] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.223 [2024-11-05 04:40:19.641859] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:06.224 [2024-11-05 04:40:19.651046] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.224 [2024-11-05 04:40:19.651726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.224 [2024-11-05 04:40:19.651771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.224 [2024-11-05 04:40:19.651783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.224 [2024-11-05 04:40:19.652021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.224 [2024-11-05 04:40:19.652244] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.224 [2024-11-05 04:40:19.652253] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.224 [2024-11-05 04:40:19.652260] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.224 [2024-11-05 04:40:19.655805] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.224 [2024-11-05 04:40:19.665003] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.224 [2024-11-05 04:40:19.665645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.224 [2024-11-05 04:40:19.665683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.224 [2024-11-05 04:40:19.665694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.224 [2024-11-05 04:40:19.665944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.224 [2024-11-05 04:40:19.666168] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.224 [2024-11-05 04:40:19.666177] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.224 [2024-11-05 04:40:19.666184] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.224 [2024-11-05 04:40:19.669727] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:06.224 [2024-11-05 04:40:19.678913] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.224 [2024-11-05 04:40:19.679493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.224 [2024-11-05 04:40:19.679531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.224 [2024-11-05 04:40:19.679543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.224 [2024-11-05 04:40:19.679792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.224 [2024-11-05 04:40:19.680016] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.224 [2024-11-05 04:40:19.680024] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.224 [2024-11-05 04:40:19.680032] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.224 [2024-11-05 04:40:19.683577] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.224 [2024-11-05 04:40:19.692768] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.224 [2024-11-05 04:40:19.693369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.224 [2024-11-05 04:40:19.693406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.224 [2024-11-05 04:40:19.693417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.224 [2024-11-05 04:40:19.693655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.224 [2024-11-05 04:40:19.693887] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.224 [2024-11-05 04:40:19.693897] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.224 [2024-11-05 04:40:19.693905] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.224 [2024-11-05 04:40:19.697448] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:06.224 [2024-11-05 04:40:19.706634] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.224 [2024-11-05 04:40:19.707345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.224 [2024-11-05 04:40:19.707383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.224 [2024-11-05 04:40:19.707394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.224 [2024-11-05 04:40:19.707633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.224 [2024-11-05 04:40:19.707864] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.224 [2024-11-05 04:40:19.707878] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.224 [2024-11-05 04:40:19.707886] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.224 [2024-11-05 04:40:19.711431] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.224 [2024-11-05 04:40:19.720619] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.224 [2024-11-05 04:40:19.721256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.224 [2024-11-05 04:40:19.721294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.224 [2024-11-05 04:40:19.721305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.224 [2024-11-05 04:40:19.721543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.224 [2024-11-05 04:40:19.721773] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.224 [2024-11-05 04:40:19.721783] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.224 [2024-11-05 04:40:19.721790] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.224 [2024-11-05 04:40:19.725333] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:06.224 [2024-11-05 04:40:19.734522] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.224 [2024-11-05 04:40:19.735062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.224 [2024-11-05 04:40:19.735081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.224 [2024-11-05 04:40:19.735089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.224 [2024-11-05 04:40:19.735308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.224 [2024-11-05 04:40:19.735527] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.224 [2024-11-05 04:40:19.735535] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.224 [2024-11-05 04:40:19.735542] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.224 [2024-11-05 04:40:19.739084] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.224 [2024-11-05 04:40:19.748482] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.224 [2024-11-05 04:40:19.749088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.224 [2024-11-05 04:40:19.749126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.224 [2024-11-05 04:40:19.749137] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.224 [2024-11-05 04:40:19.749375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.224 [2024-11-05 04:40:19.749598] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.224 [2024-11-05 04:40:19.749606] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.224 [2024-11-05 04:40:19.749614] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.224 [2024-11-05 04:40:19.753170] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:06.224 [2024-11-05 04:40:19.762366] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.224 [2024-11-05 04:40:19.763068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.224 [2024-11-05 04:40:19.763106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.224 [2024-11-05 04:40:19.763118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.224 [2024-11-05 04:40:19.763357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.224 [2024-11-05 04:40:19.763579] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.224 [2024-11-05 04:40:19.763588] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.224 [2024-11-05 04:40:19.763595] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.224 [2024-11-05 04:40:19.767147] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.224 [2024-11-05 04:40:19.776331] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.224 [2024-11-05 04:40:19.777050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.224 [2024-11-05 04:40:19.777090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.224 [2024-11-05 04:40:19.777101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.224 [2024-11-05 04:40:19.777339] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.224 [2024-11-05 04:40:19.777562] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.224 [2024-11-05 04:40:19.777571] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.224 [2024-11-05 04:40:19.777578] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.225 [2024-11-05 04:40:19.781129] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:06.225 [2024-11-05 04:40:19.790317] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.225 [2024-11-05 04:40:19.790857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.225 [2024-11-05 04:40:19.790896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.225 [2024-11-05 04:40:19.790908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.225 [2024-11-05 04:40:19.791149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.225 [2024-11-05 04:40:19.791374] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.225 [2024-11-05 04:40:19.791383] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.225 [2024-11-05 04:40:19.791390] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.225 [2024-11-05 04:40:19.794942] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.225 [2024-11-05 04:40:19.804127] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.225 [2024-11-05 04:40:19.804806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.225 [2024-11-05 04:40:19.804849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.225 [2024-11-05 04:40:19.804862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.225 [2024-11-05 04:40:19.805103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.225 [2024-11-05 04:40:19.805326] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.225 [2024-11-05 04:40:19.805335] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.225 [2024-11-05 04:40:19.805343] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.225 [2024-11-05 04:40:19.808894] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:06.225 [2024-11-05 04:40:19.818090] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.225 [2024-11-05 04:40:19.818778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.225 [2024-11-05 04:40:19.818817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.225 [2024-11-05 04:40:19.818829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.225 [2024-11-05 04:40:19.819070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.225 [2024-11-05 04:40:19.819293] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.225 [2024-11-05 04:40:19.819302] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.225 [2024-11-05 04:40:19.819309] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.225 [2024-11-05 04:40:19.822864] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.225 [2024-11-05 04:40:19.831882] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.225 [2024-11-05 04:40:19.832568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.225 [2024-11-05 04:40:19.832606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.225 [2024-11-05 04:40:19.832617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.225 [2024-11-05 04:40:19.832862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.225 [2024-11-05 04:40:19.833085] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.225 [2024-11-05 04:40:19.833094] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.225 [2024-11-05 04:40:19.833101] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.225 [2024-11-05 04:40:19.836643] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:06.225 [2024-11-05 04:40:19.845844] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.225 [2024-11-05 04:40:19.846536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.225 [2024-11-05 04:40:19.846574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.225 [2024-11-05 04:40:19.846585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.225 [2024-11-05 04:40:19.846840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.225 [2024-11-05 04:40:19.847063] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.225 [2024-11-05 04:40:19.847072] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.225 [2024-11-05 04:40:19.847079] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.225 [2024-11-05 04:40:19.850623] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.225 [2024-11-05 04:40:19.859820] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.225 [2024-11-05 04:40:19.860490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.486 [2024-11-05 04:40:19.860528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.486 [2024-11-05 04:40:19.860540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.486 [2024-11-05 04:40:19.860787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.486 [2024-11-05 04:40:19.861012] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.486 [2024-11-05 04:40:19.861021] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.486 [2024-11-05 04:40:19.861028] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.486 [2024-11-05 04:40:19.864573] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:06.486 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:06.486 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:06.486 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:06.486 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:06.486 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.486 [2024-11-05 04:40:19.873774] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.486 [2024-11-05 04:40:19.874341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.486 [2024-11-05 04:40:19.874361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.486 [2024-11-05 04:40:19.874369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.486 [2024-11-05 04:40:19.874588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.487 [2024-11-05 04:40:19.874816] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.487 [2024-11-05 04:40:19.874826] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.487 [2024-11-05 04:40:19.874833] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.487 [2024-11-05 04:40:19.878373] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
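Interleaved with the reconnect errors, the xtrace lines from autotest_common.sh show the target-side script leaving a wait loop: `(( i == 0 ))` evaluates false (the retry counter never ran out) and the helper returns 0, so `timing_exit start_nvmf_tgt` records the target as started. A minimal sketch of that countdown idiom, assuming a poll-until-ready helper of this shape (the names `wait_for_target` and `some_readiness_probe` are placeholders; the real helper lives in SPDK's test/common scripts):

    # hypothetical sketch of a countdown-style readiness loop
    wait_for_target() {
        local i
        for (( i = 50; i != 0; i-- )); do
            # replace with the real readiness probe, e.g. an RPC ping
            some_readiness_probe && break
            sleep 0.1
        done
        (( i == 0 )) && return 1   # counter exhausted: timed out
        return 0                   # probe succeeded before the countdown ended
    }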
00:29:06.487 [2024-11-05 04:40:19.887559] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.487 [2024-11-05 04:40:19.888113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.487 [2024-11-05 04:40:19.888130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.487 [2024-11-05 04:40:19.888143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.487 [2024-11-05 04:40:19.888362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.487 [2024-11-05 04:40:19.888581] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.487 [2024-11-05 04:40:19.888590] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.487 [2024-11-05 04:40:19.888597] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.487 [2024-11-05 04:40:19.892203] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:06.487 [2024-11-05 04:40:19.901393] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.487 [2024-11-05 04:40:19.902066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.487 [2024-11-05 04:40:19.902104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.487 [2024-11-05 04:40:19.902116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.487 [2024-11-05 04:40:19.902353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.487 [2024-11-05 04:40:19.902576] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.487 [2024-11-05 04:40:19.902585] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.487 [2024-11-05 04:40:19.902593] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.487 [2024-11-05 04:40:19.906143] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.487 [2024-11-05 04:40:19.912616] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.487 [2024-11-05 04:40:19.915331] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.487 [2024-11-05 04:40:19.915921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.487 [2024-11-05 04:40:19.915941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.487 [2024-11-05 04:40:19.915949] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.487 [2024-11-05 04:40:19.916168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.487 [2024-11-05 04:40:19.916387] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.487 [2024-11-05 04:40:19.916396] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.487 [2024-11-05 04:40:19.916403] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.487 [2024-11-05 04:40:19.919947] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
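Target-side setup proceeds in parallel with the host's failing reconnects: `nvmf_create_transport -t tcp -o -u 8192` initializes the TCP transport with an 8 KiB I/O unit size (`-u 8192`), which the "TCP Transport Init" notice from tcp.c confirms, and `bdev_malloc_create 64 512 -b Malloc0` creates a 64 MB RAM-backed bdev with 512-byte blocks. Outside the test harness, the same RPCs can be issued directly with SPDK's rpc.py (default RPC socket assumed):

    # equivalent direct RPC calls against a running nvmf_tgt
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # prints the bdev name, "Malloc0"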
00:29:06.487 [2024-11-05 04:40:19.929124] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.487 [2024-11-05 04:40:19.929670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.487 [2024-11-05 04:40:19.929686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.487 [2024-11-05 04:40:19.929694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.487 [2024-11-05 04:40:19.929917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.487 [2024-11-05 04:40:19.930136] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.487 [2024-11-05 04:40:19.930144] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.487 [2024-11-05 04:40:19.930152] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.487 [2024-11-05 04:40:19.933685] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:06.487 [2024-11-05 04:40:19.943080] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.487 [2024-11-05 04:40:19.943621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.487 [2024-11-05 04:40:19.943636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.487 [2024-11-05 04:40:19.943644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.487 [2024-11-05 04:40:19.943867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.487 [2024-11-05 04:40:19.944087] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.487 [2024-11-05 04:40:19.944095] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.487 [2024-11-05 04:40:19.944103] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.487 Malloc0 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.487 [2024-11-05 04:40:19.947636] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
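The bare "Malloc0" interleaved above is the RPC's reply: bdev_malloc_create 64 512 -b Malloc0 builds a 64 MiB RAM-backed bdev with 512-byte blocks and prints the new bdev's name. Issued directly (same hypothetical $SPDK_DIR as above):

    $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0   # size in MiB, block size in bytes
    $SPDK_DIR/scripts/rpc.py bdev_get_bdevs -b Malloc0              # confirm it registered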
00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.487 [2024-11-05 04:40:19.957065] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.487 [2024-11-05 04:40:19.957569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.487 [2024-11-05 04:40:19.957584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.487 [2024-11-05 04:40:19.957592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.487 [2024-11-05 04:40:19.957816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.487 [2024-11-05 04:40:19.958035] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.487 [2024-11-05 04:40:19.958048] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.487 [2024-11-05 04:40:19.958055] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.487 [2024-11-05 04:40:19.961603] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
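host/bdevperf.sh@19-20 then publish that bdev over NVMe-oF: create the subsystem, attach the namespace. As direct calls (a sketch):

    # -a: allow any host NQN to connect; -s: controller serial number
    $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0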
00:29:06.487 [2024-11-05 04:40:19.971020] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:06.487 [2024-11-05 04:40:19.971651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:06.487 [2024-11-05 04:40:19.971689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x200d000 with addr=10.0.0.2, port=4420 00:29:06.487 [2024-11-05 04:40:19.971701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x200d000 is same with the state(6) to be set 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.487 [2024-11-05 04:40:19.971951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x200d000 (9): Bad file descriptor 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:06.487 [2024-11-05 04:40:19.972174] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:06.487 [2024-11-05 04:40:19.972183] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:06.487 [2024-11-05 04:40:19.972191] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:06.487 [2024-11-05 04:40:19.975734] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:06.487 [2024-11-05 04:40:19.978287] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.487 04:40:19 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3170916 00:29:06.487 [2024-11-05 04:40:19.984926] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:06.488 [2024-11-05 04:40:20.018756] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
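With the listener added at host/bdevperf.sh@21 the target bring-up is complete, which is why the very next reset attempt ([2024-11-05 04:40:19.984926]) finally succeeds. The last step, plus an optional initiator-side sanity check (a sketch; the check assumes nvme-cli and the kernel nvme-tcp module, which this job does not use — it connects through the SPDK host stack instead):

    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
    # Optional kernel-initiator check, not part of this test:
    modprobe nvme-tcp
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1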
00:29:08.003 4775.43 IOPS, 18.65 MiB/s [2024-11-05T03:40:22.585Z] 5571.00 IOPS, 21.76 MiB/s [2024-11-05T03:40:23.526Z] 6188.67 IOPS, 24.17 MiB/s [2024-11-05T03:40:24.912Z] 6697.30 IOPS, 26.16 MiB/s [2024-11-05T03:40:25.855Z] 7100.00 IOPS, 27.73 MiB/s [2024-11-05T03:40:26.797Z] 7435.75 IOPS, 29.05 MiB/s [2024-11-05T03:40:27.739Z] 7727.77 IOPS, 30.19 MiB/s [2024-11-05T03:40:28.679Z] 7986.79 IOPS, 31.20 MiB/s 00:29:15.039 Latency(us) 00:29:15.039 [2024-11-05T03:40:28.679Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.039 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:15.039 Verification LBA range: start 0x0 length 0x4000 00:29:15.039 Nvme1n1 : 15.01 8201.44 32.04 9776.03 0.00 7094.60 788.48 15510.19 00:29:15.039 [2024-11-05T03:40:28.679Z] =================================================================================================================== 00:29:15.039 [2024-11-05T03:40:28.679Z] Total : 8201.44 32.04 9776.03 0.00 7094.60 788.48 15510.19 00:29:15.039 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:15.039 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:15.039 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:15.039 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:15.039 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:15.039 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:15.039 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:15.039 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:15.039 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:29:15.039 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:15.039 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:29:15.039 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:15.039 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:15.039 rmmod nvme_tcp 00:29:15.039 rmmod nvme_fabrics 00:29:15.039 rmmod nvme_keyring 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 3172231 ']' 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 3172231 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 3172231 ']' 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 3172231 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3172231 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3172231' 00:29:15.299 killing process with pid 3172231 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 3172231 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 3172231 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:15.299 04:40:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.841 04:40:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:17.841 00:29:17.841 real 0m28.149s 00:29:17.841 user 1m3.438s 00:29:17.841 sys 0m7.437s 00:29:17.841 04:40:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:17.841 04:40:30 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:17.841 ************************************ 00:29:17.841 END TEST nvmf_bdevperf 00:29:17.841 ************************************ 00:29:17.841 04:40:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:17.841 ************************************ 00:29:17.841 START TEST nvmf_target_disconnect 00:29:17.841 ************************************ 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:17.841 * Looking for test storage... 
00:29:17.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:17.841 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:17.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.842 --rc genhtml_branch_coverage=1 00:29:17.842 --rc genhtml_function_coverage=1 00:29:17.842 --rc genhtml_legend=1 00:29:17.842 --rc geninfo_all_blocks=1 00:29:17.842 --rc geninfo_unexecuted_blocks=1 00:29:17.842 00:29:17.842 ' 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:17.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.842 --rc genhtml_branch_coverage=1 00:29:17.842 --rc genhtml_function_coverage=1 00:29:17.842 --rc genhtml_legend=1 00:29:17.842 --rc geninfo_all_blocks=1 00:29:17.842 --rc geninfo_unexecuted_blocks=1 00:29:17.842 00:29:17.842 ' 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:17.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.842 --rc genhtml_branch_coverage=1 00:29:17.842 --rc genhtml_function_coverage=1 00:29:17.842 --rc genhtml_legend=1 00:29:17.842 --rc geninfo_all_blocks=1 00:29:17.842 --rc geninfo_unexecuted_blocks=1 00:29:17.842 00:29:17.842 ' 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:17.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:17.842 --rc genhtml_branch_coverage=1 00:29:17.842 --rc genhtml_function_coverage=1 00:29:17.842 --rc genhtml_legend=1 00:29:17.842 --rc geninfo_all_blocks=1 00:29:17.842 --rc geninfo_unexecuted_blocks=1 00:29:17.842 00:29:17.842 ' 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:17.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:17.842 04:40:31 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:24.428 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:24.429 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:24.429 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:24.429 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:24.690 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:24.690 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
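nvmf/common.sh has now walked the PCI device table, matched both 0x8086:0x159b (E810) ports, and mapped each to its kernel netdev via sysfs; next it splits them across a network namespace so one machine can act as both target (cvl_0_0, 10.0.0.2) and initiator (cvl_0_1, 10.0.0.1). The same steps by hand (a sketch condensed from the trace that follows):

    pci=0000:4b:00.0
    ls "/sys/bus/pci/devices/$pci/net/"          # -> cvl_0_0 on this testbed
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up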
00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:24.690 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:24.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:24.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:29:24.951 00:29:24.951 --- 10.0.0.2 ping statistics --- 00:29:24.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.951 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:24.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:24.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:29:24.951 00:29:24.951 --- 10.0.0.1 ping statistics --- 00:29:24.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:24.951 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:24.951 ************************************ 00:29:24.951 START TEST nvmf_target_disconnect_tc1 00:29:24.951 ************************************ 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:24.951 04:40:38 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:24.951 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:24.952 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:24.952 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:24.952 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:24.952 [2024-11-05 04:40:38.581560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:24.952 [2024-11-05 04:40:38.581623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xffbad0 with addr=10.0.0.2, port=4420 00:29:24.952 [2024-11-05 04:40:38.581652] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:24.952 [2024-11-05 04:40:38.581665] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:24.952 [2024-11-05 04:40:38.581673] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:24.952 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:24.952 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:24.952 Initializing NVMe Controllers 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:25.213 00:29:25.213 real 0m0.125s 00:29:25.213 user 0m0.058s 00:29:25.213 sys 0m0.068s 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:25.213 ************************************ 00:29:25.213 END TEST nvmf_target_disconnect_tc1 00:29:25.213 ************************************ 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 
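tc1 is a negative test: no nvmf_tgt is listening yet, so the reconnect example must fail to probe 10.0.0.2:4420, and the NOT wrapper at host/target_disconnect.sh@32 converts that failure into a pass (es=1, so (( !es == 0 )) holds). A simplified sketch of the pattern (the real helper in autotest_common.sh also screens signal exits via the es > 128 check visible above):

    NOT() { "$@" && return 1 || return 0; }   # succeed only if the command fails
    NOT "$SPDK_DIR/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 \
        -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'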
00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:25.213 ************************************ 00:29:25.213 START TEST nvmf_target_disconnect_tc2 00:29:25.213 ************************************ 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3178256 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3178256 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3178256 ']' 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:25.213 04:40:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:25.213 [2024-11-05 04:40:38.733549] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:29:25.213 [2024-11-05 04:40:38.733611] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.213 [2024-11-05 04:40:38.831384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:25.474 [2024-11-05 04:40:38.883018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.474 [2024-11-05 04:40:38.883072] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:25.474 [2024-11-05 04:40:38.883081] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.474 [2024-11-05 04:40:38.883088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.474 [2024-11-05 04:40:38.883094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.474 [2024-11-05 04:40:38.885220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:25.474 [2024-11-05 04:40:38.885378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:25.474 [2024-11-05 04:40:38.885506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:25.474 [2024-11-05 04:40:38.885506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.045 Malloc0 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.045 [2024-11-05 04:40:39.651238] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.045 04:40:39 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:26.045 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:26.306 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:26.306 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:26.306 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:26.306 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:26.306 [2024-11-05 04:40:39.691609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:26.306 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:26.306 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:26.306 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:26.306 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:26.306 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:26.306 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3178308
00:29:26.306 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:29:26.306 04:40:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:29:28.224 04:40:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3178256
00:29:28.224 04:40:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:29:28.224 Read completed with error (sct=0, sc=8)
00:29:28.224 starting I/O failed
[... the same completed-with-error / starting-I/O-failed pair repeats for all 32 outstanding I/Os (26 reads, 6 writes), every one with (sct=0, sc=8) ...]
00:29:28.224 [2024-11-05 04:40:41.725096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:28.224 [2024-11-05 04:40:41.725442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.224 [2024-11-05 04:40:41.725463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:28.224 qpair failed and we were unable to recover it.
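For anyone replaying this outside the autotest harness, the tc2 sequence traced above reduces to roughly the sketch below. It is a minimal reconstruction, not the harness itself: SPDK_DIR, the rpc.py helper invocation, and TARGET_PID are assumptions (the log only shows the rpc_cmd wrapper, the reconnect example binary, and the literal PIDs 3178308 and 3178256).

  #!/usr/bin/env bash
  # Minimal sketch of the target_disconnect tc2 flow, assuming a running
  # nvmf target with a Malloc0 bdev and an SPDK build tree in $SPDK_DIR.
  SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
  RPC="$SPDK_DIR/scripts/rpc.py"   # assumption: rpc_cmd in the trace wraps this
  TARGET_PID=$1                    # assumption: PID of the nvmf target (3178256 in this run)

  # Target-side setup, mirroring the rpc_cmd calls in the trace above.
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # Host side: queue depth 32, 4 KiB random 50/50 read/write, 10 s, cores 0-3.
  "$SPDK_DIR/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!

  sleep 2                          # let I/O get in flight
  kill -9 "$TARGET_PID"            # yank the target mid-run
  sleep 2                          # watch the host walk the disconnect path

The kill -9 is the point of the test: with the target gone, every in-flight I/O on the qpair must complete in error and the host's reconnect logic is left retrying against a dead listener, which is exactly what the log shows next.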
[... the connect() failed, errno = 111 / sock connection error of tqpair=0x7f6018000b90 / "qpair failed and we were unable to recover it." triple repeats about 180 times between 04:40:41.725 and 04:40:41.781 as the host retries 10.0.0.2:4420; the entries differ only in their microsecond timestamps, and only the last repetition is shown below ...]
00:29:28.230 [2024-11-05 04:40:41.780999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.230 [2024-11-05 04:40:41.781008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:28.230 qpair failed and we were unable to recover it.
00:29:28.230 [2024-11-05 04:40:41.781310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.230 [2024-11-05 04:40:41.781318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.230 qpair failed and we were unable to recover it. 00:29:28.230 [2024-11-05 04:40:41.781634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.230 [2024-11-05 04:40:41.781642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.230 qpair failed and we were unable to recover it. 00:29:28.230 [2024-11-05 04:40:41.781919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.230 [2024-11-05 04:40:41.781927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.230 qpair failed and we were unable to recover it. 00:29:28.230 [2024-11-05 04:40:41.782218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.230 [2024-11-05 04:40:41.782226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.230 qpair failed and we were unable to recover it. 00:29:28.230 [2024-11-05 04:40:41.782531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.230 [2024-11-05 04:40:41.782539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.230 qpair failed and we were unable to recover it. 00:29:28.230 [2024-11-05 04:40:41.782825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.230 [2024-11-05 04:40:41.782833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.230 qpair failed and we were unable to recover it. 00:29:28.230 [2024-11-05 04:40:41.783152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.230 [2024-11-05 04:40:41.783160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.230 qpair failed and we were unable to recover it. 00:29:28.230 [2024-11-05 04:40:41.783470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.230 [2024-11-05 04:40:41.783479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.230 qpair failed and we were unable to recover it. 00:29:28.230 [2024-11-05 04:40:41.783808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.230 [2024-11-05 04:40:41.783817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.230 qpair failed and we were unable to recover it. 00:29:28.230 [2024-11-05 04:40:41.784121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.230 [2024-11-05 04:40:41.784130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.230 qpair failed and we were unable to recover it. 
00:29:28.230 [2024-11-05 04:40:41.784422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.230 [2024-11-05 04:40:41.784430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.230 qpair failed and we were unable to recover it. 00:29:28.230 [2024-11-05 04:40:41.784755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.230 [2024-11-05 04:40:41.784764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.230 qpair failed and we were unable to recover it. 00:29:28.230 [2024-11-05 04:40:41.785076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.230 [2024-11-05 04:40:41.785086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.230 qpair failed and we were unable to recover it. 00:29:28.230 [2024-11-05 04:40:41.785405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.785413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.785688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.785696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.786009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.786017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.786319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.786328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.786626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.786635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.786919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.786927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.787262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.787270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 
00:29:28.231 [2024-11-05 04:40:41.787557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.787564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.787873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.787881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.788188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.788197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.788480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.788488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.788811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.788819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.789120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.789130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.789500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.789508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.789823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.789831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.790138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.790146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.790455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.790464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 
00:29:28.231 [2024-11-05 04:40:41.790770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.790778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.791082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.791090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.791383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.791392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.791694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.791703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.791958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.791966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.792253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.792262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.792560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.792569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.792866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.792874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.793181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.793190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.793493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.793501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 
00:29:28.231 [2024-11-05 04:40:41.793807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.793815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.794116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.794124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.794434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.794443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.794754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.794763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.795053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.795061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.795365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.795374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.795678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.795687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.795981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.795990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.796292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.796300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.796606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.796613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 
00:29:28.231 [2024-11-05 04:40:41.796905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.796913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.231 [2024-11-05 04:40:41.797227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.231 [2024-11-05 04:40:41.797235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.231 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.797544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.797552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.797843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.797852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.798154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.798162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.798438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.798446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.798773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.798783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.799099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.799106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.799410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.799418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.799729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.799737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 
00:29:28.232 [2024-11-05 04:40:41.800051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.800061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.800364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.800373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.800670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.800679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.800983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.800991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.801296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.801305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.801584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.801595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.801915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.801923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.802087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.802095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.802460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.802468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.802797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.802807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 
00:29:28.232 [2024-11-05 04:40:41.803118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.803126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.803470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.803478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.803806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.803814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.804126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.804134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.804457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.804465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.804768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.804776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.805085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.805096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.805379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.805387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.805695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.805703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.806010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.806019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 
00:29:28.232 [2024-11-05 04:40:41.806317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.806325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.806641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.806648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.806816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.806825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.807084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.807093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.807414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.807422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.807725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.807735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.807936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.807945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.808276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.808284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.808648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.808657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 00:29:28.232 [2024-11-05 04:40:41.808960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.232 [2024-11-05 04:40:41.808969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.232 qpair failed and we were unable to recover it. 
00:29:28.232 [2024-11-05 04:40:41.809284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.809293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.809604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.809612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.809905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.809913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.810227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.810236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.810532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.810541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.810829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.810837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.811151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.811158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.811461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.811469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.811755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.811763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.812036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.812044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 
00:29:28.233 [2024-11-05 04:40:41.812359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.812367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.812690] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.812699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.813018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.813027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.813329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.813338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.813664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.813672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.813985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.813994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.814301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.814309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.814631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.814640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.814955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.814963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.815261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.815270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 
00:29:28.233 [2024-11-05 04:40:41.815573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.815581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.815898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.815906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.816213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.816221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.816509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.816517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.816814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.816822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.817147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.817156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.817441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.817450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.817759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.817768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.818064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.818072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.818359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.818367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 
00:29:28.233 [2024-11-05 04:40:41.818669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.818677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.818971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.818979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.819265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.819274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.819578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.819587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.819899] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.819907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.820221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.820230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.820538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.820545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.820874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.233 [2024-11-05 04:40:41.820881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.233 qpair failed and we were unable to recover it. 00:29:28.233 [2024-11-05 04:40:41.821180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-11-05 04:40:41.821187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.234 qpair failed and we were unable to recover it. 00:29:28.234 [2024-11-05 04:40:41.821487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.234 [2024-11-05 04:40:41.821495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.234 qpair failed and we were unable to recover it. 
00:29:28.234 [2024-11-05 04:40:41.821776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.234 [2024-11-05 04:40:41.821783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:28.234 qpair failed and we were unable to recover it.
00:29:28.234 [... the same three-message sequence (posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back from 04:40:41.821 through 04:40:41.861, pauses for roughly 200 ms, then resumes from 04:40:42.063 ...]
00:29:28.504 [2024-11-05 04:40:42.085385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.504 [2024-11-05 04:40:42.085396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:28.504 qpair failed and we were unable to recover it.
00:29:28.504 [2024-11-05 04:40:42.085758] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.085771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.086116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.086126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.086321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.086332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.086568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.086577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.086887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.086898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.087225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.087239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.087588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.087599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.087929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.087941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.088305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.088316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.088635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.088645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 
00:29:28.504 [2024-11-05 04:40:42.088983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.088995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.089310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.089321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.089708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.089719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.090044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.090055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.090400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.090411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.090755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.090768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.091106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.091117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.091465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.091476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.091826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.091838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.092185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.092199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 
00:29:28.504 [2024-11-05 04:40:42.092546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.092558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.092910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.092921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.093254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.093265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.093551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.093561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.093789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.093799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.094095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.094105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.094462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.094474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.094803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.094816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.095139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.095150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.095473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.095483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 
00:29:28.504 [2024-11-05 04:40:42.095814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.095824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.096060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.096070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.096403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.096414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.096760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.096773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.097121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.504 [2024-11-05 04:40:42.097133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.504 qpair failed and we were unable to recover it. 00:29:28.504 [2024-11-05 04:40:42.097529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.097540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.097880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.097891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.098210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.098220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.098541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.098552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.098866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.098877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 
00:29:28.505 [2024-11-05 04:40:42.099229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.099239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.099560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.099570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.099896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.099910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.100272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.100284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.100607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.100618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.100966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.100979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.101300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.101310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.101654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.101665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.102014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.102026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.102668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.102688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 
00:29:28.505 [2024-11-05 04:40:42.102915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.102926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.103265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.103276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.103602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.103614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.103980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.103992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.104335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.104347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.104674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.104685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.105013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.105025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.105332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.105343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.105670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.105681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.106084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.106094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 
00:29:28.505 [2024-11-05 04:40:42.106424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.106435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.106773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.106784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.107117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.107128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.107460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.107471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.107818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.107829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.108148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.108160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.108490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.108501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.108824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.108835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.109204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.109215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.109567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.109578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 
00:29:28.505 [2024-11-05 04:40:42.109917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.109927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.110251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.110261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.110606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.505 [2024-11-05 04:40:42.110617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.505 qpair failed and we were unable to recover it. 00:29:28.505 [2024-11-05 04:40:42.110928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.110938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.111209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.111221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.111538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.111549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.111792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.111803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.112139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.112149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.112490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.112500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.112814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.112826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 
00:29:28.506 [2024-11-05 04:40:42.113178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.113188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.113373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.113383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.113655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.113666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.113967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.113979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.114298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.114308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.114618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.114629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.114984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.114996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.115344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.115355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.115666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.115679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.115995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.116006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 
00:29:28.506 [2024-11-05 04:40:42.116331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.116342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.116682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.116693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.117038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.117050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.117357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.117368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.117678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.117689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.117977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.117988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.118312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.118326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.118643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.118653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.118941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.118953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.119278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.119290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 
00:29:28.506 [2024-11-05 04:40:42.119601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.119613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.119972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.119983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.120308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.120320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.120659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.120671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.120990] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.121003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.506 [2024-11-05 04:40:42.121309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.506 [2024-11-05 04:40:42.121321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.506 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.121660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.121670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.121995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.122007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.122350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.122362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.122710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.122722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 
00:29:28.507 [2024-11-05 04:40:42.123039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.123052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.123382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.123394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.123669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.123682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.124026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.124038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.124395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.124407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.124722] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.124734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.125078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.125090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.125449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.125460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.125789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.125801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.126023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.126034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 
00:29:28.507 [2024-11-05 04:40:42.126362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.126374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.126724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.126735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.127072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.127085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.127428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.127440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.127792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.127809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.128106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.128116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.128439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.128450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.128774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.128785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.129128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.129140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 00:29:28.507 [2024-11-05 04:40:42.129481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.507 [2024-11-05 04:40:42.129493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.507 qpair failed and we were unable to recover it. 
00:29:28.507 [2024-11-05 04:40:42.129833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.507 [2024-11-05 04:40:42.129845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:28.507 qpair failed and we were unable to recover it.
[... the same three-line error sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats without interruption from 04:40:42.129 through 04:40:42.198 ...]
00:29:28.786 [2024-11-05 04:40:42.198341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:28.786 [2024-11-05 04:40:42.198350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:28.786 qpair failed and we were unable to recover it.
00:29:28.786 [2024-11-05 04:40:42.198632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-05 04:40:42.198641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-05 04:40:42.198926] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-05 04:40:42.198936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-05 04:40:42.199243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-05 04:40:42.199252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-05 04:40:42.199448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-05 04:40:42.199458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-05 04:40:42.199771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-05 04:40:42.199780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-05 04:40:42.200088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-05 04:40:42.200097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-05 04:40:42.200393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-05 04:40:42.200402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-05 04:40:42.200714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-05 04:40:42.200723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-05 04:40:42.201014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-05 04:40:42.201023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-05 04:40:42.201355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-05 04:40:42.201365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 
00:29:28.786 [2024-11-05 04:40:42.201650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-05 04:40:42.201659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-05 04:40:42.201966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-05 04:40:42.201976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-05 04:40:42.202284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.786 [2024-11-05 04:40:42.202296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.786 qpair failed and we were unable to recover it. 00:29:28.786 [2024-11-05 04:40:42.202600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.202609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.202943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.202952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.203277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.203287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.203582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.203592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.203910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.203922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.204218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.204229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.204536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.204547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 
00:29:28.787 [2024-11-05 04:40:42.204823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.204835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.205179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.205190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.205495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.205505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.205817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.205827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.206139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.206148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.206469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.206479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.206781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.206790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.207107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.207117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.207432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.207441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.207732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.207741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 
00:29:28.787 [2024-11-05 04:40:42.208051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.208060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.208448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.208458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.208765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.208774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.209053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.209063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.209359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.209369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.209683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.209693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.210023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.210032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.210353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.210364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.210663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.210673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.210998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.211008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 
00:29:28.787 [2024-11-05 04:40:42.211360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.211372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.211736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.211750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.212095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.212106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.212419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.212429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.212783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.212792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.213118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.213127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.213442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.213452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.213812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.213823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.214008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.214017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.214219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.214229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 
00:29:28.787 [2024-11-05 04:40:42.214547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.787 [2024-11-05 04:40:42.214558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.787 qpair failed and we were unable to recover it. 00:29:28.787 [2024-11-05 04:40:42.214902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.214912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.215215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.215226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.215514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.215523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.215862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.215871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.216177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.216187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.216504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.216514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.216790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.216801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.217127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.217137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.217444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.217453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 
00:29:28.788 [2024-11-05 04:40:42.217763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.217773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.218010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.218019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.218359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.218368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.218677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.218687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.218862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.218872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.219199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.219209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.219502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.219512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.219816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.219825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.220017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.220025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.220215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.220224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 
00:29:28.788 [2024-11-05 04:40:42.220569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.220578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.220768] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.220776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.221081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.221091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.221246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.221256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.221553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.221562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.221878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.221888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.222246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.222258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.222555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.222564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.222885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.222894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.223226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.223235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 
00:29:28.788 [2024-11-05 04:40:42.223548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.223558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.223912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.223922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.224252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.224261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.224574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.224583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.224764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.224773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.225063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.225074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.225367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.225377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.225685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.225694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.225987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.788 [2024-11-05 04:40:42.225997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.788 qpair failed and we were unable to recover it. 00:29:28.788 [2024-11-05 04:40:42.226306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.226315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 
00:29:28.789 [2024-11-05 04:40:42.226623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.226633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.226943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.226953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.227262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.227273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.227580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.227590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.227937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.227947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.228252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.228261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.228577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.228587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.228896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.228905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.229193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.229202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.229526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.229535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 
00:29:28.789 [2024-11-05 04:40:42.229924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.229934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.230243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.230252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.230591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.230601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.230914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.230924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.231248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.231258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.231567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.231577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.231876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.231885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.232194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.232203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.232521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.232532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.232817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.232828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 
00:29:28.789 [2024-11-05 04:40:42.233123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.233133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.233433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.233442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.233752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.233762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.234070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.234079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.234377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.234386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.234694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.234703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.235012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.235022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.235332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.235342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.235672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.235682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.235991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.236001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 
00:29:28.789 [2024-11-05 04:40:42.236316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.236325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.236633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.236643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.236962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.236972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.237282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.237291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.237598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.237607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.237915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.237925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.238221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.238231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.789 [2024-11-05 04:40:42.238522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.789 [2024-11-05 04:40:42.238533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.789 qpair failed and we were unable to recover it. 00:29:28.790 [2024-11-05 04:40:42.238811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.790 [2024-11-05 04:40:42.238822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.790 qpair failed and we were unable to recover it. 00:29:28.790 [2024-11-05 04:40:42.239139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.790 [2024-11-05 04:40:42.239149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.790 qpair failed and we were unable to recover it. 
00:29:28.790 [2024-11-05 04:40:42.239463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.790 [2024-11-05 04:40:42.239472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.790 qpair failed and we were unable to recover it.
[log condensed: the same error pair — posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 (ECONNREFUSED) followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 — repeats for every retry between 2024-11-05 04:40:42.239463 and 04:40:42.304164, and each attempt ends with "qpair failed and we were unable to recover it."]
00:29:28.795 [2024-11-05 04:40:42.304156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.795 [2024-11-05 04:40:42.304164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.795 qpair failed and we were unable to recover it.
00:29:28.795 [2024-11-05 04:40:42.304506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.795 [2024-11-05 04:40:42.304515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.795 qpair failed and we were unable to recover it. 00:29:28.795 [2024-11-05 04:40:42.304771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.795 [2024-11-05 04:40:42.304780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.795 qpair failed and we were unable to recover it. 00:29:28.795 [2024-11-05 04:40:42.305114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.795 [2024-11-05 04:40:42.305123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.795 qpair failed and we were unable to recover it. 00:29:28.795 [2024-11-05 04:40:42.305425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.795 [2024-11-05 04:40:42.305434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.795 qpair failed and we were unable to recover it. 00:29:28.795 [2024-11-05 04:40:42.305743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.795 [2024-11-05 04:40:42.305757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.795 qpair failed and we were unable to recover it. 00:29:28.795 [2024-11-05 04:40:42.306028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.795 [2024-11-05 04:40:42.306037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.795 qpair failed and we were unable to recover it. 00:29:28.795 [2024-11-05 04:40:42.306330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.795 [2024-11-05 04:40:42.306339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.795 qpair failed and we were unable to recover it. 00:29:28.795 [2024-11-05 04:40:42.306638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.306647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.306922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.306931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.307258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.307266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 
00:29:28.796 [2024-11-05 04:40:42.307558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.307566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.307874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.307883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.308192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.308201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.308508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.308517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.308808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.308817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.309131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.309141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.309450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.309459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.309650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.309659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.309924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.309933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.310243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.310253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 
00:29:28.796 [2024-11-05 04:40:42.310558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.310567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.310873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.310881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.311172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.311180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.311487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.311496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.311827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.311835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.312164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.312173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.312469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.312477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.312781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.312789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.313134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.313142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.313450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.313458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 
00:29:28.796 [2024-11-05 04:40:42.313650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.313658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.314042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.314051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.314353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.314362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.314670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.314679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.314987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.314995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.315304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.315314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.315620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.315629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.315924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.315933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.316245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.316254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.316560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.316569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 
00:29:28.796 [2024-11-05 04:40:42.316881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.316890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.317212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.317220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.317528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.317536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.317845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.317854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.318163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.318172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.796 qpair failed and we were unable to recover it. 00:29:28.796 [2024-11-05 04:40:42.318483] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.796 [2024-11-05 04:40:42.318492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.318865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.318874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.319083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.319091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.319255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.319264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.319568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.319576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 
00:29:28.797 [2024-11-05 04:40:42.319868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.319877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.320203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.320211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.320521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.320529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.320851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.320859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.321159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.321167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.321474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.321483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.321785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.321794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.322108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.322116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.322447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.322456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.322766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.322776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 
00:29:28.797 [2024-11-05 04:40:42.323094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.323104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.323410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.323420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.323701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.323710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.324084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.324094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.324394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.324404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.324754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.324764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.325084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.325093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.325401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.325419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.325626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.325634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.325917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.325925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 
00:29:28.797 [2024-11-05 04:40:42.326221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.326230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.326536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.326544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.326868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.326877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.327176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.327185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.327547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.327555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.327863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.327873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.328157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.328166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.328515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.328524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.328729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.328738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.329025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.329034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 
00:29:28.797 [2024-11-05 04:40:42.329354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.329362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.329714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.329723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.330027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.330035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.797 [2024-11-05 04:40:42.330392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.797 [2024-11-05 04:40:42.330400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.797 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.330704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.330712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.331029] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.331038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.331217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.331225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.331420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.331429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.331708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.331716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.331902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.331911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 
00:29:28.798 [2024-11-05 04:40:42.332075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.332084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.332253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.332261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.332565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.332574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.332903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.332912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.333235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.333243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.333563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.333572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.333861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.333870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.334181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.334189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.334527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.334536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.334909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.334919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 
00:29:28.798 [2024-11-05 04:40:42.335215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.335224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.335529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.335538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.335869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.335878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.336199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.336209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.336562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.336570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.336879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.336887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.337198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.337206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.337536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.337544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.337862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.337871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.338227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.338236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 
00:29:28.798 [2024-11-05 04:40:42.338579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.338587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.338948] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.338957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.339260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.339269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.339430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.339439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.339735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.339744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.340100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.340108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.340417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.340425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.340748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.340757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.341079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.341087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.341470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.341478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 
00:29:28.798 [2024-11-05 04:40:42.341789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.341798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.341991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.798 [2024-11-05 04:40:42.341999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.798 qpair failed and we were unable to recover it. 00:29:28.798 [2024-11-05 04:40:42.342322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.799 [2024-11-05 04:40:42.342332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.799 qpair failed and we were unable to recover it. 00:29:28.799 [2024-11-05 04:40:42.342650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.799 [2024-11-05 04:40:42.342658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.799 qpair failed and we were unable to recover it. 00:29:28.799 [2024-11-05 04:40:42.342986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.799 [2024-11-05 04:40:42.342995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.799 qpair failed and we were unable to recover it. 00:29:28.799 [2024-11-05 04:40:42.343154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.799 [2024-11-05 04:40:42.343163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.799 qpair failed and we were unable to recover it. 00:29:28.799 [2024-11-05 04:40:42.343494] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.799 [2024-11-05 04:40:42.343503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.799 qpair failed and we were unable to recover it. 00:29:28.799 [2024-11-05 04:40:42.343685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.799 [2024-11-05 04:40:42.343695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.799 qpair failed and we were unable to recover it. 00:29:28.799 [2024-11-05 04:40:42.344021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.799 [2024-11-05 04:40:42.344030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.799 qpair failed and we were unable to recover it. 00:29:28.799 [2024-11-05 04:40:42.344337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.799 [2024-11-05 04:40:42.344347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.799 qpair failed and we were unable to recover it. 
00:29:28.799 [2024-11-05 04:40:42.344644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:28.799 [2024-11-05 04:40:42.344654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:28.799 qpair failed and we were unable to recover it.
[... the same two-line error pair — posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 — repeats continuously from 04:40:42.344 through 04:40:42.406, each occurrence ending with "qpair failed and we were unable to recover it." ...]
00:29:29.079 [2024-11-05 04:40:42.406686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.406698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it.
00:29:29.079 [2024-11-05 04:40:42.407004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.407014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.407320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.407329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.407634] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.407642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.407959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.407968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.408272] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.408280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.408520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.408528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.408858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.408866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.409219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.409228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.409535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.409544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.409783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.409791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 
00:29:29.079 [2024-11-05 04:40:42.410070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.410079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.410371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.410380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.410705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.410715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.411075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.411084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.411384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.411392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.411568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.411577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.411847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.411857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.412050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.412060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.412348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.412358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.412659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.412668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 
00:29:29.079 [2024-11-05 04:40:42.413008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.413017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.413319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.413329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.413631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.413640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.413959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.413968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.414282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.414292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.414604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.414612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.414924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.414934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.415244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.415252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.415558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.415567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.415867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.415876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 
00:29:29.079 [2024-11-05 04:40:42.416193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.416201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.416490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.416498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.416816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.416824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.079 [2024-11-05 04:40:42.417017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.079 [2024-11-05 04:40:42.417024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.079 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.417346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.417354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.417642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.417651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.417931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.417939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.418263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.418272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.418576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.418583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.418774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.418785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 
00:29:29.080 [2024-11-05 04:40:42.419067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.419076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.419384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.419393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.419701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.419709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.420059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.420068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.420377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.420385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.420660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.420669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.420975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.420984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.421270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.421278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.421584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.421594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.421901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.421910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 
00:29:29.080 [2024-11-05 04:40:42.422260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.422268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.422561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.422569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.422891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.422899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.423229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.423237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.423548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.423556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.423885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.423893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.424210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.424218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.424529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.424538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.424710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.424719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.425011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.425020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 
00:29:29.080 [2024-11-05 04:40:42.425331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.425339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.425645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.425654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.426003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.426012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.426379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.426388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.426687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.426696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.427014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.427024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.427327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.427337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.427641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.427650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.427955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.427965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.428263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.428272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 
00:29:29.080 [2024-11-05 04:40:42.428580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.428589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.080 qpair failed and we were unable to recover it. 00:29:29.080 [2024-11-05 04:40:42.428894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.080 [2024-11-05 04:40:42.428903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.429209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.429219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.429523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.429533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.429810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.429820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.430199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.430208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.430509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.430518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.430822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.430831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.431151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.431159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.431462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.431470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 
00:29:29.081 [2024-11-05 04:40:42.431778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.431787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.432138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.432147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.432457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.432466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.432759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.432768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.432943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.432951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.433238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.433246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.433556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.433564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.433877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.433885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.434207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.434215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.434538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.434548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 
00:29:29.081 [2024-11-05 04:40:42.434863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.434871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.435184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.435192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.435501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.435509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.435668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.435676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.435893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.435902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.436218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.436226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.436608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.436617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.436830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.436839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.437165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.437173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.437497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.437506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 
00:29:29.081 [2024-11-05 04:40:42.437813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.437821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.438132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.438141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.438386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.438394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.438719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.438728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.439101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.439110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.439411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.439419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.439717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.439730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.440026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.440034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.440340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.440349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 00:29:29.081 [2024-11-05 04:40:42.440657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.081 [2024-11-05 04:40:42.440666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.081 qpair failed and we were unable to recover it. 
00:29:29.082 [2024-11-05 04:40:42.440966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.440975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.441265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.441274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.441580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.441588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.441760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.441770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.441930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.441939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.442214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.442222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.442509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.442517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.442812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.442820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.443139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.443147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.443446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.443454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 
00:29:29.082 [2024-11-05 04:40:42.443748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.443757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.444128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.444136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.444446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.444454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.444763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.444771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.445075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.445084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.445397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.445406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.445711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.445719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.445884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.445894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.446201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.446209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.446492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.446500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 
00:29:29.082 [2024-11-05 04:40:42.446823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.446831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.447042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.447050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.447429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.447437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.447737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.447744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.448031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.448039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.448223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.448231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.448541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.448549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.448855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.448864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.449172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.449180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 00:29:29.082 [2024-11-05 04:40:42.449470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.082 [2024-11-05 04:40:42.449479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.082 qpair failed and we were unable to recover it. 
00:29:29.088 [2024-11-05 04:40:42.510684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.510693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.510999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.511007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.511303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.511312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.511623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.511631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.511915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.511924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.512267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.512274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.512577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.512586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.512874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.512883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.513218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.513227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.513527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.513535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 
00:29:29.088 [2024-11-05 04:40:42.513889] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.513898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.514084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.514091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.514387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.514395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.514699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.514708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.515012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.515020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.515323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.515332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.515624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.515633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.515915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.515924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.516226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.516235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.516545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.516553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 
00:29:29.088 [2024-11-05 04:40:42.516836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.516845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.517140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.517148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.517442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.517452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.517755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.517764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.517941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.517949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.518268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.088 [2024-11-05 04:40:42.518276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.088 qpair failed and we were unable to recover it. 00:29:29.088 [2024-11-05 04:40:42.518580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.518590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.518890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.518898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.519235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.519244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.519545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.519554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 
00:29:29.089 [2024-11-05 04:40:42.519868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.519876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.520198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.520206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.520495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.520503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.520806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.520815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.521124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.521132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.521434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.521441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.521731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.521740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.522060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.522068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.522338] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.522346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.522614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.522623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 
00:29:29.089 [2024-11-05 04:40:42.522913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.522921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.523239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.523248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.523551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.523559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.523868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.523877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.524156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.524164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.524480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.524489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.524821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.524832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.525156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.525165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.525453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.525461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.525765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.525773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 
00:29:29.089 [2024-11-05 04:40:42.526051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.526060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.526364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.526372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.526659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.526666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.526988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.526996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.527303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.527312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.527619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.527627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.527929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.527937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.528246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.528254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.528560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.528569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.528872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.528880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 
00:29:29.089 [2024-11-05 04:40:42.529218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.529227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.529537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.529546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.529857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.529865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.530175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.089 [2024-11-05 04:40:42.530184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.089 qpair failed and we were unable to recover it. 00:29:29.089 [2024-11-05 04:40:42.530469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.530478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.530769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.530778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.531095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.531103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.531409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.531417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.531704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.531712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.532016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.532026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 
00:29:29.090 [2024-11-05 04:40:42.532325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.532333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.532639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.532648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.532971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.532980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.533286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.533295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.533568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.533576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.533881] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.533891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.534198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.534207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.534512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.534521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.534808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.534817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.535131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.535139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 
00:29:29.090 [2024-11-05 04:40:42.535429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.535439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.538757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.538775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.539165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.539175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.539526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.539536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.539854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.539867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.540189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.540203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.540512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.540526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.540896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.540908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.541213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.541225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.541550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.541560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 
00:29:29.090 [2024-11-05 04:40:42.541867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.541877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.542183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.542192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.542482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.542490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.542770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.542779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.543069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.543078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.543251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.543260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.543616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.543624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.543947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.543955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.544279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.544287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.544597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.544605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 
00:29:29.090 [2024-11-05 04:40:42.544891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.544900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.545204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.545213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.090 qpair failed and we were unable to recover it. 00:29:29.090 [2024-11-05 04:40:42.545518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.090 [2024-11-05 04:40:42.545526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.545814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.545823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.546157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.546166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.546467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.546475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.546779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.546788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.547110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.547118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.547407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.547415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.547719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.547727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 
00:29:29.091 [2024-11-05 04:40:42.548027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.548037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.548346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.548355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.548655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.548664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.548941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.548949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.549257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.549267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.549539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.549548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.549756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.549764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.550059] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.550068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.550371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.550380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.550693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.550701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 
00:29:29.091 [2024-11-05 04:40:42.551006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.551014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.551323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.551331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.551641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.551650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.551840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.551849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.552130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.552138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.552431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.552440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.552772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.552783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.553097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.553105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.553392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.553401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 00:29:29.091 [2024-11-05 04:40:42.553712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.091 [2024-11-05 04:40:42.553720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.091 qpair failed and we were unable to recover it. 
00:29:29.091 [2024-11-05 04:40:42.554071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.091 [2024-11-05 04:40:42.554080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:29.091 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair error sequence repeats for every reconnect attempt from 04:40:42.554 through 04:40:42.615; only the timestamps vary, while errno (111), tqpair (0x7f6018000b90), address (10.0.0.2), and port (4420) are identical throughout ...]
00:29:29.097 [2024-11-05 04:40:42.615791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.097 [2024-11-05 04:40:42.615799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:29.097 qpair failed and we were unable to recover it.
00:29:29.097 [2024-11-05 04:40:42.616085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.616094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.616405] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.616414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.616750] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.616758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.617058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.617067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.617387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.617396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.617719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.617727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.617977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.617986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.618305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.618313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.618614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.618622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.618910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.618918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 
00:29:29.097 [2024-11-05 04:40:42.619096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.619104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.619294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.619302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.619636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.619643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.619957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.619968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.620139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.620148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.620467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.620476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.620664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.620672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.620958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.620966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.621194] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.621202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.621527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.621535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 
00:29:29.097 [2024-11-05 04:40:42.621831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.097 [2024-11-05 04:40:42.621839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.097 qpair failed and we were unable to recover it. 00:29:29.097 [2024-11-05 04:40:42.622017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.622026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.622337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.622345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.622771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.622780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.622980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.622988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.623284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.623292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.623623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.623632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.623815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.623824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.624152] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.624161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.624469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.624478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 
00:29:29.098 [2024-11-05 04:40:42.624663] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.624672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.624956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.624966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.625278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.625287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.625599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.625607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.625956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.625965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.626131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.626139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.626417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.626426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.626784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.626792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.627108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.627117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.627429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.627437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 
00:29:29.098 [2024-11-05 04:40:42.627773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.627782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.628103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.628111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.628437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.628445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.628724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.628733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.629064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.629073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.629383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.629392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.629759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.629768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.629942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.629951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.630334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.630343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.630536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.630543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 
00:29:29.098 [2024-11-05 04:40:42.630859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.630867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.631169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.631178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.631467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.631475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.631646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.631655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.631989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.631997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.632305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.632314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.632603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.632612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.632918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.632926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.098 qpair failed and we were unable to recover it. 00:29:29.098 [2024-11-05 04:40:42.633238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.098 [2024-11-05 04:40:42.633247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.633440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.633448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 
00:29:29.099 [2024-11-05 04:40:42.633749] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.633758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.634031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.634039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.634353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.634361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.634703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.634711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.634890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.634898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.635088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.635096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.635409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.635417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.635727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.635737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.636103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.636112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.636423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.636433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 
00:29:29.099 [2024-11-05 04:40:42.636732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.636740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.637061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.637070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.637401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.637409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.637725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.637733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.638033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.638043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.638348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.638356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.638652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.638661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.639017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.639026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.639335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.639344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.639654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.639662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 
00:29:29.099 [2024-11-05 04:40:42.639992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.640001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.640249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.640257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.640581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.640590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.640630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.640638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.640934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.640942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.641235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.641244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.641439] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.641447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.641754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.641763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.642054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.642062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.642399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.642407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 
00:29:29.099 [2024-11-05 04:40:42.642769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.642778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.643110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.643118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.643434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.643442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.643806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.643817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.644151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.644159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.644368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.644375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.099 qpair failed and we were unable to recover it. 00:29:29.099 [2024-11-05 04:40:42.644649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.099 [2024-11-05 04:40:42.644657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.644995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.645003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.645360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.645368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.645656] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.645664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 
00:29:29.100 [2024-11-05 04:40:42.645968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.645976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.646270] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.646279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.646579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.646587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.646910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.646920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.647227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.647235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.647522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.647531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.647814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.647822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.648146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.648155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.648457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.648465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.648793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.648802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 
00:29:29.100 [2024-11-05 04:40:42.649096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.649104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.649409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.649418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.649730] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.649738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.650046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.650055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.650357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.650365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.650665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.650675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.650987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.650995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.651285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.651294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.651591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.651599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.651910] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.651919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 
00:29:29.100 [2024-11-05 04:40:42.652219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.652228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.652516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.652524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.652733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.652741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.653049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.653057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.653360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.653369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.653659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.653668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.653972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.653980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.654285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.654294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.654597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.654606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.654781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.654791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 
00:29:29.100 [2024-11-05 04:40:42.655062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.655070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.655354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.655363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.655678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.655686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.655982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.100 [2024-11-05 04:40:42.655993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.100 qpair failed and we were unable to recover it. 00:29:29.100 [2024-11-05 04:40:42.656299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-11-05 04:40:42.656307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-11-05 04:40:42.656620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-11-05 04:40:42.656628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-11-05 04:40:42.656831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-11-05 04:40:42.656839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-11-05 04:40:42.657180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-11-05 04:40:42.657189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-11-05 04:40:42.657498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-11-05 04:40:42.657506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 00:29:29.101 [2024-11-05 04:40:42.657833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.101 [2024-11-05 04:40:42.657841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.101 qpair failed and we were unable to recover it. 
00:29:29.383 [2024-11-05 04:40:42.716391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.716400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.716709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.716717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.717039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.717047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.717376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.717387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.717693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.717701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.717998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.718007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.718308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.718317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.718660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.718668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.718981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.718991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.719293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.719301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 
00:29:29.383 [2024-11-05 04:40:42.719609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.719618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.719943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.719952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.720311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.720319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.720628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.720637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.720911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.720920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.721221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.721229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.721534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.721543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.721845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.721854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.722170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.722178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.722352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.722360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 
00:29:29.383 [2024-11-05 04:40:42.722678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.722686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.722867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.722875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.723199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.723207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.383 qpair failed and we were unable to recover it. 00:29:29.383 [2024-11-05 04:40:42.723533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.383 [2024-11-05 04:40:42.723541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.723810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.723818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.724100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.724108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.724412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.724421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.724711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.724719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.725027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.725037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.725226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.725234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 
00:29:29.384 [2024-11-05 04:40:42.725546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.725554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.725853] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.725861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.726207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.726216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.726521] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.726530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.726830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.726840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.727154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.727164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.727458] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.727467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.727813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.727823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.728142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.728151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.728336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.728346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 
00:29:29.384 [2024-11-05 04:40:42.728650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.728660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.728956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.728965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.729273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.729282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.729566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.729577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.729887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.729896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.730214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.730223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.730522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.730531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.730810] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.730819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.731134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.731143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.731447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.731456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 
00:29:29.384 [2024-11-05 04:40:42.731778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.731787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.732109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.732117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.732427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.732436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.732738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.732751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.733008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.733016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.733351] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.733359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.733693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.733701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.734009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.734018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.734323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.734331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.734621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.734629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 
00:29:29.384 [2024-11-05 04:40:42.735010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.735019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.384 [2024-11-05 04:40:42.735313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.384 [2024-11-05 04:40:42.735322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.384 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.735635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.735642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.736008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.736016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.736318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.736327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.736636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.736644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.736958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.736967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.737304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.737312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.737493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.737501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.737788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.737796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 
00:29:29.385 [2024-11-05 04:40:42.738110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.738119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.738396] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.738404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.738710] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.738719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.739026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.739035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.739339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.739348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.739633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.739641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.739924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.739932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.740249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.740256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.740567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.740576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.740792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.740801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 
00:29:29.385 [2024-11-05 04:40:42.741129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.741138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.741441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.741449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.741753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.741762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.742042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.742051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.742356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.742365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.742669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.742677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.742956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.742966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.743289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.743297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.743603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.743611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.743912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.743921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 
00:29:29.385 [2024-11-05 04:40:42.744224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.744233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.744537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.744545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.744857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.744866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.745169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.745177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.745482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.745491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.745814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.745822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.746134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.746142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.746304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.746314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.746602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.746611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.385 qpair failed and we were unable to recover it. 00:29:29.385 [2024-11-05 04:40:42.746909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.385 [2024-11-05 04:40:42.746917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 
00:29:29.386 [2024-11-05 04:40:42.747223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.747232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.747547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.747555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.747867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.747875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.748181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.748190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.748497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.748505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.748815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.748823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.749186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.749194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.749533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.749541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.749861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.749870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.750197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.750205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 
00:29:29.386 [2024-11-05 04:40:42.750512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.750522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.750845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.750854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.751063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.751071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.751376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.751384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.751738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.751750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.752053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.752062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.752371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.752380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.752682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.752691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.752982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.752991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.753284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.753293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 
00:29:29.386 [2024-11-05 04:40:42.753593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.753602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.753796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.753804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.754082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.754090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.754381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.754389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.754687] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.754697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.755069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.755078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.755389] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.755397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.755574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.755583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.755896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.755904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.756256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.756263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 
00:29:29.386 [2024-11-05 04:40:42.756567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.756576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.386 qpair failed and we were unable to recover it. 00:29:29.386 [2024-11-05 04:40:42.756937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.386 [2024-11-05 04:40:42.756946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-11-05 04:40:42.757256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-11-05 04:40:42.757264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-11-05 04:40:42.757566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-11-05 04:40:42.757574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-11-05 04:40:42.757954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-11-05 04:40:42.757962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-11-05 04:40:42.758273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-11-05 04:40:42.758281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-11-05 04:40:42.758582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-11-05 04:40:42.758590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-11-05 04:40:42.758756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-11-05 04:40:42.758765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-11-05 04:40:42.759156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-11-05 04:40:42.759164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 00:29:29.387 [2024-11-05 04:40:42.759488] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.387 [2024-11-05 04:40:42.759497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.387 qpair failed and we were unable to recover it. 
00:29:29.387 [2024-11-05 04:40:42.759802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.387 [2024-11-05 04:40:42.759810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:29.387 qpair failed and we were unable to recover it.
[... the three records above repeat roughly 210 times with advancing timestamps (04:40:42.759802 through 04:40:42.822399); every reconnect attempt against the same tqpair 0x7f6018000b90 fails identically with errno = 111 ...]
00:29:29.392 [2024-11-05 04:40:42.822391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.392 [2024-11-05 04:40:42.822399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:29.392 qpair failed and we were unable to recover it.
00:29:29.392 [2024-11-05 04:40:42.822729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.392 [2024-11-05 04:40:42.822739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.392 qpair failed and we were unable to recover it. 00:29:29.392 [2024-11-05 04:40:42.823073] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.392 [2024-11-05 04:40:42.823082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.392 qpair failed and we were unable to recover it. 00:29:29.392 [2024-11-05 04:40:42.823385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.392 [2024-11-05 04:40:42.823393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.392 qpair failed and we were unable to recover it. 00:29:29.392 [2024-11-05 04:40:42.823603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.392 [2024-11-05 04:40:42.823611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.392 qpair failed and we were unable to recover it. 00:29:29.392 [2024-11-05 04:40:42.823874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.392 [2024-11-05 04:40:42.823883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.392 qpair failed and we were unable to recover it. 00:29:29.392 [2024-11-05 04:40:42.824084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.392 [2024-11-05 04:40:42.824092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.392 qpair failed and we were unable to recover it. 00:29:29.392 [2024-11-05 04:40:42.824420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.824429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.824740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.824753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.825058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.825067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.825372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.825380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 
00:29:29.393 [2024-11-05 04:40:42.825705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.825714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.825895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.825903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.826178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.826186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.826501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.826509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.826738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.826751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.827054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.827062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.827370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.827378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.827683] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.827691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.827996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.828005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.828310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.828319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 
00:29:29.393 [2024-11-05 04:40:42.828511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.828519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.828839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.828847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.829148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.829157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.829476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.829483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.829816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.829824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.830106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.830114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.830317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.830324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.830648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.830656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.830967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.830976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.831189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.831196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 
00:29:29.393 [2024-11-05 04:40:42.831512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.831520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.831830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.831838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.832175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.832183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.832374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.832383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.832539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.832547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.832755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.832764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.833048] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.833056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.833394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.833404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.833699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.833708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.834021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.834029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 
00:29:29.393 [2024-11-05 04:40:42.834332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.834342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.834510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.834519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.834724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.834732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.835038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.393 [2024-11-05 04:40:42.835048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.393 qpair failed and we were unable to recover it. 00:29:29.393 [2024-11-05 04:40:42.835353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.835362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.835551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.835560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.835861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.835869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.836182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.836191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.836513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.836521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.836831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.836840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 
00:29:29.394 [2024-11-05 04:40:42.837129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.837137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.837311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.837319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.837599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.837607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.837789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.837797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.838124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.838132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.838301] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.838309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.838618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.838626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.838912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.838920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.839219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.839227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.839546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.839555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 
00:29:29.394 [2024-11-05 04:40:42.839863] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.839873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.840196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.840204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.840495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.840504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.840814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.840823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.841141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.841149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.841454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.841463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.841673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.841680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.841972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.841981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.842291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.842300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.842472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.842481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 
00:29:29.394 [2024-11-05 04:40:42.842789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.842797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.843120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.843128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.843476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.843483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.843679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.843686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.843976] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.843984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.844292] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.844299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.844606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.844614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.844915] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.844931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.845261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.845269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.845580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.845589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 
00:29:29.394 [2024-11-05 04:40:42.845901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.845911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.846226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.394 [2024-11-05 04:40:42.846235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.394 qpair failed and we were unable to recover it. 00:29:29.394 [2024-11-05 04:40:42.846552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.846561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.846870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.846879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.847113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.847121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.847423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.847431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.847658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.847667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.847974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.847983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.848291] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.848299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.848607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.848616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 
00:29:29.395 [2024-11-05 04:40:42.848920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.848928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.849243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.849252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.849546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.849555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.849862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.849871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.850212] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.850220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.850411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.850419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.850732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.850741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.851018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.851026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.851237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.851245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.851570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.851578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 
00:29:29.395 [2024-11-05 04:40:42.851757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.851765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.852068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.852076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.852414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.852423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.852732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.852741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.853045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.853054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.853375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.853382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.853689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.853698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.853982] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.853991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.854297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.854306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.854491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.854500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 
00:29:29.395 [2024-11-05 04:40:42.854828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.854837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.855146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.855155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.855452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.855461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.855772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.855782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.856105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.856113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.395 [2024-11-05 04:40:42.856420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.395 [2024-11-05 04:40:42.856429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.395 qpair failed and we were unable to recover it. 00:29:29.396 [2024-11-05 04:40:42.856739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-11-05 04:40:42.856750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-11-05 04:40:42.857067] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-11-05 04:40:42.857075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-11-05 04:40:42.857440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-11-05 04:40:42.857448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-11-05 04:40:42.857751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-11-05 04:40:42.857760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 
00:29:29.396 [2024-11-05 04:40:42.858080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-11-05 04:40:42.858090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-11-05 04:40:42.858398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-11-05 04:40:42.858407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-11-05 04:40:42.858708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-11-05 04:40:42.858716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-11-05 04:40:42.859028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-11-05 04:40:42.859037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-11-05 04:40:42.859345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-11-05 04:40:42.859352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-11-05 04:40:42.859628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-11-05 04:40:42.859636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-11-05 04:40:42.859914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-11-05 04:40:42.859923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-11-05 04:40:42.860118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-11-05 04:40:42.860126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-11-05 04:40:42.860440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-11-05 04:40:42.860448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 00:29:29.396 [2024-11-05 04:40:42.860757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.396 [2024-11-05 04:40:42.860766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.396 qpair failed and we were unable to recover it. 
00:29:29.396 [2024-11-05 04:40:42.861031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.396 [2024-11-05 04:40:42.861039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:29.396 qpair failed and we were unable to recover it.
[... the same three-line sequence — connect() failed, errno = 111; sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats for every retry from 04:40:42.861 through 04:40:42.923 ...]
00:29:29.401 [2024-11-05 04:40:42.923076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.401 [2024-11-05 04:40:42.923086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:29.401 qpair failed and we were unable to recover it.
00:29:29.401 [2024-11-05 04:40:42.923352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.401 [2024-11-05 04:40:42.923361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.401 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.923661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.923669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.923972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.923981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.924164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.924173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.924454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.924463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.924766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.924775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.925087] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.925095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.925390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.925400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.925707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.925715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.925890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.925898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 
00:29:29.402 [2024-11-05 04:40:42.926229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.926237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.926518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.926527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.926724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.926732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.926924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.926932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.927104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.927112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.927453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.927461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.927766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.927775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.928086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.928094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.928287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.928295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.928573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.928581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 
00:29:29.402 [2024-11-05 04:40:42.928890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.928899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.929219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.929227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.929570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.929578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.929756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.929765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.930086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.930094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.930393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.930403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.930706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.930714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.930999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.931009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.931317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.931324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.931627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.931636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 
00:29:29.402 [2024-11-05 04:40:42.931913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.931921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.932221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.932231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.932540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.932549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.932862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.932870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.933192] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.933200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.933496] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.933504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.933807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.933816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.933981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.933989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.402 [2024-11-05 04:40:42.934328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.402 [2024-11-05 04:40:42.934336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.402 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.934693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.934701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 
00:29:29.403 [2024-11-05 04:40:42.935051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.935059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.935369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.935378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.935668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.935676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.935998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.936008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.936288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.936297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.936676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.936685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.936988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.936998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.937198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.937208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.937381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.937390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.937771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.937779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 
00:29:29.403 [2024-11-05 04:40:42.938101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.938110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.938414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.938422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.938726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.938734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.939137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.939145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.939455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.939464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.939744] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.939756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.940057] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.940065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.940368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.940377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.940674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.940683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.940989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.940998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 
00:29:29.403 [2024-11-05 04:40:42.941300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.941308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.941613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.941622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.941914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.941922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.942223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.942232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.942584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.942592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.942892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.942901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.943203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.943211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.943540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.943548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.943936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.943945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.944298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.944306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 
00:29:29.403 [2024-11-05 04:40:42.944609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.944618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.944916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.944925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.945251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.945260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.945612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.945620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.945912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.945922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.946271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.946279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.946586] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.403 [2024-11-05 04:40:42.946595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.403 qpair failed and we were unable to recover it. 00:29:29.403 [2024-11-05 04:40:42.946942] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.946951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.947254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.947263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.947621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.947629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 
00:29:29.404 [2024-11-05 04:40:42.947931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.947940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.948257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.948265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.948579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.948587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.948896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.948904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.949225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.949233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.949558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.949566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.949741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.949755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.950071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.950081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.950384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.950394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.950700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.950708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 
00:29:29.404 [2024-11-05 04:40:42.951020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.951029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.951253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.951261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.951430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.951438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.951751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.951760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.952061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.952069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.952357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.952365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.952666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.952675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.953063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.953071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.953358] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.953367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.953691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.953700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 
00:29:29.404 [2024-11-05 04:40:42.954016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.954026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.954324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.954333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.954654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.954663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.954952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.954961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.955265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.955274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.955576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.955585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.955891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.955901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.956190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.956199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.956505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.956514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.404 [2024-11-05 04:40:42.956796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.956806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 
00:29:29.404 [2024-11-05 04:40:42.957121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.404 [2024-11-05 04:40:42.957130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.404 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.957421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.957430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.957738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.957751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.958017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.958025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.958328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.958336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.958628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.958637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.958912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.958920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.959127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.959134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.959449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.959457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.959789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.959806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 
00:29:29.405 [2024-11-05 04:40:42.960124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.960132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.960457] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.960465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.960651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.960659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.960961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.960969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.961273] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.961282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.961590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.961597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.961902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.961912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.962219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.962230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.962552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.962561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.962859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.962868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 
00:29:29.405 [2024-11-05 04:40:42.963130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.963138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.963463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.963471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.963813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.963822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.964107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.964116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.964428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.964437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.964733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.964742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.965068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.965077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.965403] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.965412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.965716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.965726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 00:29:29.405 [2024-11-05 04:40:42.966011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.405 [2024-11-05 04:40:42.966021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.405 qpair failed and we were unable to recover it. 
00:29:29.686 [2024-11-05 04:40:43.025920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-11-05 04:40:43.025928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-11-05 04:40:43.026241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-11-05 04:40:43.026250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-11-05 04:40:43.026555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-11-05 04:40:43.026563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-11-05 04:40:43.026769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-11-05 04:40:43.026777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-11-05 04:40:43.027088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-11-05 04:40:43.027097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-11-05 04:40:43.027391] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-11-05 04:40:43.027399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-11-05 04:40:43.027706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-11-05 04:40:43.027714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-11-05 04:40:43.028016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-11-05 04:40:43.028025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-11-05 04:40:43.028324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-11-05 04:40:43.028333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-11-05 04:40:43.028621] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-11-05 04:40:43.028630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 
00:29:29.686 [2024-11-05 04:40:43.028932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-11-05 04:40:43.028940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-11-05 04:40:43.029110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-11-05 04:40:43.029118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.686 [2024-11-05 04:40:43.029468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.686 [2024-11-05 04:40:43.029476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.686 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.029759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.029768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.030074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.030082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.030386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.030395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.030704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.030712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.030989] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.030998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.031299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.031306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.031611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.031620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 
00:29:29.687 [2024-11-05 04:40:43.031916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.031924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.032240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.032249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.032550] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.032559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.032867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.032876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.033206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.033214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.033506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.033515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.033825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.033833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.034054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.034064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.034373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.034381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.034713] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.034721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 
00:29:29.687 [2024-11-05 04:40:43.035001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.035009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.035316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.035325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.035636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.035645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.035957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.035966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.036265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.036275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.036577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.036586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.036893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.036901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.037196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.037205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.037516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.037525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.037719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.037727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 
00:29:29.687 [2024-11-05 04:40:43.038005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.038013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.038312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.038329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.038651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.038659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.038912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.038921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.039250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.039258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.039583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.039592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.039891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.039899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.040159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.040167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.040477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.040485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.040773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.040781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 
00:29:29.687 [2024-11-05 04:40:43.041188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.687 [2024-11-05 04:40:43.041196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.687 qpair failed and we were unable to recover it. 00:29:29.687 [2024-11-05 04:40:43.041493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.041500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.041811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.041820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.042157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.042166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.042470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.042479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.042776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.042784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.043110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.043118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.043425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.043433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.043757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.043766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.044086] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.044094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 
00:29:29.688 [2024-11-05 04:40:43.044400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.044409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.044709] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.044717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.045026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.045035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.045348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.045357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.045659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.045668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.045968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.045976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.046286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.046294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.046599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.046609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.046913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.046923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.047215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.047223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 
00:29:29.688 [2024-11-05 04:40:43.047539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.047548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.047851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.047859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.048165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.048173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.048444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.048452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.048619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.048628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.048850] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.048858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.049174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.049182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.049477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.049484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.049785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.049793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.050109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.050118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 
00:29:29.688 [2024-11-05 04:40:43.050429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.050438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.050723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.050732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.051047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.051056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.051360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.051370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.051676] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.051685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.051996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.052006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.052309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.052317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.052638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.052646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.052956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.688 [2024-11-05 04:40:43.052965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.688 qpair failed and we were unable to recover it. 00:29:29.688 [2024-11-05 04:40:43.053251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.053258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 
00:29:29.689 [2024-11-05 04:40:43.053562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.053571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.053877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.053885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.054205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.054214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.054527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.054544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.054818] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.054828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.055159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.055167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.055468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.055476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.055778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.055786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.056109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.056117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.056424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.056431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 
00:29:29.689 [2024-11-05 04:40:43.056745] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.056756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.057089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.057098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.057400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.057408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.057684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.057693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.058014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.058023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.058319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.058327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.058631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.058639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.058828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.058837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.059146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.059154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.059438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.059446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 
00:29:29.689 [2024-11-05 04:40:43.059723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.059732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.060089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.060097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.060407] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.060416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.060697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.060706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.061012] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.061022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.061329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.061338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.061635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.061644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.061952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.061962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.062266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.062275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.062579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.062588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 
00:29:29.689 [2024-11-05 04:40:43.062791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.689 [2024-11-05 04:40:43.062801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.689 qpair failed and we were unable to recover it. 00:29:29.689 [2024-11-05 04:40:43.063089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.690 [2024-11-05 04:40:43.063097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.690 qpair failed and we were unable to recover it. 00:29:29.690 [2024-11-05 04:40:43.063404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.690 [2024-11-05 04:40:43.063413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.690 qpair failed and we were unable to recover it. 00:29:29.690 [2024-11-05 04:40:43.063699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.690 [2024-11-05 04:40:43.063707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.690 qpair failed and we were unable to recover it. 00:29:29.690 [2024-11-05 04:40:43.064013] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.690 [2024-11-05 04:40:43.064022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.690 qpair failed and we were unable to recover it. 00:29:29.690 [2024-11-05 04:40:43.064362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.690 [2024-11-05 04:40:43.064370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.690 qpair failed and we were unable to recover it. 00:29:29.690 [2024-11-05 04:40:43.064673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.690 [2024-11-05 04:40:43.064682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.690 qpair failed and we were unable to recover it. 00:29:29.690 [2024-11-05 04:40:43.064987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.690 [2024-11-05 04:40:43.064996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.690 qpair failed and we were unable to recover it. 00:29:29.690 [2024-11-05 04:40:43.065298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.690 [2024-11-05 04:40:43.065307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.690 qpair failed and we were unable to recover it. 00:29:29.690 [2024-11-05 04:40:43.065598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.690 [2024-11-05 04:40:43.065608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.690 qpair failed and we were unable to recover it. 
00:29:29.690 [2024-11-05 04:40:43.065911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.690 [2024-11-05 04:40:43.065920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:29.690 qpair failed and we were unable to recover it.
00:29:29.690 [... the same three-line connect()/qpair failure record repeats for every reconnect attempt from 2024-11-05 04:40:43.066233 through 04:40:43.129122, with only the timestamps changing (roughly 200 identical repetitions) ...]
00:29:29.695 [2024-11-05 04:40:43.129452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.695 [2024-11-05 04:40:43.129461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:29.695 qpair failed and we were unable to recover it.
00:29:29.695 [2024-11-05 04:40:43.129648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-11-05 04:40:43.129656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-11-05 04:40:43.130001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-11-05 04:40:43.130010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-11-05 04:40:43.130310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-11-05 04:40:43.130318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-11-05 04:40:43.130626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-11-05 04:40:43.130634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.695 [2024-11-05 04:40:43.130848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.695 [2024-11-05 04:40:43.130856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.695 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.131175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.131183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.131479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.131487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.131808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.131816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.132091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.132099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.132431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.132440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 
00:29:29.696 [2024-11-05 04:40:43.132732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.132741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.133020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.133028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.133200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.133208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.133536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.133545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.133741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.133755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.134079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.134087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.134420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.134429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.134738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.134751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.135035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.135045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.135343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.135351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 
00:29:29.696 [2024-11-05 04:40:43.135684] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.135693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.135885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.135895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.136205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.136213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.136492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.136508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.136719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.136727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.136922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.136930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.137254] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.137261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.137581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.137590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.137902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.137910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.138076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.138084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 
00:29:29.696 [2024-11-05 04:40:43.138329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.138338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.138508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.138516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.138784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.138792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.139096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.139104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.139187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.139196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.139478] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.139485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.139559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.139567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.139855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.139864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.140045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.140053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.140344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.140352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 
00:29:29.696 [2024-11-05 04:40:43.140669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.140678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.140871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.140880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.141055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.696 [2024-11-05 04:40:43.141063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.696 qpair failed and we were unable to recover it. 00:29:29.696 [2024-11-05 04:40:43.141349] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.141358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.141688] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.141697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.142035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.142045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.142347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.142356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.142666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.142675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.142991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.143001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.143183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.143192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 
00:29:29.697 [2024-11-05 04:40:43.143501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.143510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.143796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.143804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.144137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.144146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.144441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.144449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.144766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.144776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.145084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.145093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.145419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.145427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.145743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.145756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.146069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.146078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.146418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.146428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 
00:29:29.697 [2024-11-05 04:40:43.146757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.146765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.147031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.147040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.147366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.147375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.147675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.147685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.147897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.147906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.148222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.148231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.148538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.148546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.148861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.148870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.149193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.149201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.149509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.149518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 
00:29:29.697 [2024-11-05 04:40:43.149773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.149782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.150113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.150121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.150420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.150429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.150779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.150788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.151095] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.151105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.151296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.151304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.151614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.151623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.151930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.151938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.152248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.152256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.152538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.152546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 
00:29:29.697 [2024-11-05 04:40:43.152777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.697 [2024-11-05 04:40:43.152785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.697 qpair failed and we were unable to recover it. 00:29:29.697 [2024-11-05 04:40:43.153109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.153117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.153423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.153432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.153736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.153744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.154045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.154054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.154359] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.154367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.154666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.154675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.154995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.155003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.155322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.155331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.155640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.155649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 
00:29:29.698 [2024-11-05 04:40:43.156023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.156032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.156284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.156292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.156597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.156605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.156859] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.156867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.157178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.157187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.157376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.157385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.157678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.157687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.158000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.158008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.158215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.158222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.158501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.158510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 
00:29:29.698 [2024-11-05 04:40:43.158808] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.158817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.159198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.159206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.159489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.159497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.159865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.159873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.160129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.160137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.160177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.160186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.160463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.160471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.160781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.160790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.161124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.161132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.161430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.161439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 
00:29:29.698 [2024-11-05 04:40:43.161600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.161609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.161958] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.161967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.162267] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.162275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.162657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.162665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.162949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.698 [2024-11-05 04:40:43.162958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.698 qpair failed and we were unable to recover it. 00:29:29.698 [2024-11-05 04:40:43.163284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-11-05 04:40:43.163291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-11-05 04:40:43.163599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-11-05 04:40:43.163607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-11-05 04:40:43.163931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-11-05 04:40:43.163940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-11-05 04:40:43.164122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-11-05 04:40:43.164131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-11-05 04:40:43.164433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-11-05 04:40:43.164442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 
00:29:29.699 [2024-11-05 04:40:43.164756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-11-05 04:40:43.164765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-11-05 04:40:43.165066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-11-05 04:40:43.165074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-11-05 04:40:43.165310] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-11-05 04:40:43.165318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-11-05 04:40:43.165495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-11-05 04:40:43.165503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-11-05 04:40:43.165825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-11-05 04:40:43.165833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-11-05 04:40:43.166163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-11-05 04:40:43.166171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-11-05 04:40:43.166330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-11-05 04:40:43.166340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-11-05 04:40:43.166502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-11-05 04:40:43.166510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-11-05 04:40:43.166862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-11-05 04:40:43.166871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 00:29:29.699 [2024-11-05 04:40:43.167243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.699 [2024-11-05 04:40:43.167251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.699 qpair failed and we were unable to recover it. 
00:29:29.699 [2024-11-05 04:40:43.167560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.699 [2024-11-05 04:40:43.167569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:29.699 qpair failed and we were unable to recover it.
00:29:29.699 [... the same three-line failure (connect() errno = 111 on tqpair=0x7f6018000b90, addr=10.0.0.2, port=4420) repeats for roughly 200 consecutive reconnect attempts between 04:40:43.167 and 04:40:43.231; duplicate records elided ...]
00:29:29.704 [2024-11-05 04:40:43.231514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:29.704 [2024-11-05 04:40:43.231523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:29.705 qpair failed and we were unable to recover it.
00:29:29.705 [2024-11-05 04:40:43.231826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.231835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.232134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.232143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.232445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.232454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.232769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.232777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.233096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.233105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.233408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.233416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.233703] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.233712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.233891] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.233898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.234176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.234184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.234369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.234378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 
00:29:29.705 [2024-11-05 04:40:43.234581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.234589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.234916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.234924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.235239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.235247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.235520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.235528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.235826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.235836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.236157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.236165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.236473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.236482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.236783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.236792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.236997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.237005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.237316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.237323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 
00:29:29.705 [2024-11-05 04:40:43.237630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.237639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.237825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.237834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.238164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.238172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.238482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.238491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.238805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.238814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.239117] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.239126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.239437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.239446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.239633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.239640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.239906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.239914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.240241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.240249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 
00:29:29.705 [2024-11-05 04:40:43.240544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.240552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.240877] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.240886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.241211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.241219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.241530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.241539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.241774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.241783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.242076] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.242084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.242404] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.242412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.705 [2024-11-05 04:40:43.242609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.705 [2024-11-05 04:40:43.242616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.705 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.242921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.242930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.243262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.243270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 
00:29:29.706 [2024-11-05 04:40:43.243580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.243588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.243905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.243917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.244172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.244180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.244347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.244356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.244673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.244680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.244983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.244992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.245184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.245192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.245528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.245536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.245865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.245873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.246204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.246212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 
00:29:29.706 [2024-11-05 04:40:43.246512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.246520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.246819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.246827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.247132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.247140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.247445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.247454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.247783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.247792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.248099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.248107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.248414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.248422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.248734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.248741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.248961] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.248969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.249269] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.249276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 
00:29:29.706 [2024-11-05 04:40:43.249590] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.249598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.249796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.249805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.250133] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.250141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.250434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.250441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.250753] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.250762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.251141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.251149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.251455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.251464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.251773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.251781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.252101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.252109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.252415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.252423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 
00:29:29.706 [2024-11-05 04:40:43.252616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.252624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.252930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.252939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.253258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.253266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.253583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.253592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.253895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.253903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.254209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.706 [2024-11-05 04:40:43.254217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.706 qpair failed and we were unable to recover it. 00:29:29.706 [2024-11-05 04:40:43.254524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.254532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.254865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.254874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.255214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.255223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.255532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.255540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 
00:29:29.707 [2024-11-05 04:40:43.255861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.255870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.256170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.256182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.256500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.256509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.256813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.256821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.257011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.257019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.257324] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.257332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.257626] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.257633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.257839] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.257847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.258167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.258175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.258472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.258481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 
00:29:29.707 [2024-11-05 04:40:43.258798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.258807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.259148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.259156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.259476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.259485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.259774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.259783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.260090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.260098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.260422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.260430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.260645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.260653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.260761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.260769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.261069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.261078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.261261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.261269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 
00:29:29.707 [2024-11-05 04:40:43.261591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.261599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.261922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.261930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.262289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.262298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.262495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.262502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.262689] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.262698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.262873] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.262883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.263083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.263091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.263360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.263368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.263667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.263675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 00:29:29.707 [2024-11-05 04:40:43.263981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.707 [2024-11-05 04:40:43.263990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.707 qpair failed and we were unable to recover it. 
00:29:29.708 [2024-11-05 04:40:43.264304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.264312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.264620] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.264629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.264909] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.264917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.265242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.265251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.265556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.265565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.265735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.265744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.266037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.266046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.266352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.266361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.266665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.266674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.266880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.266889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 
00:29:29.708 [2024-11-05 04:40:43.267180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.267189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.267482] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.267493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.267812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.267820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.268108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.268116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.268426] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.268433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.268752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.268761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.269052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.269060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.269366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.269375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.269698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.269706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.269984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.269993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 
00:29:29.708 [2024-11-05 04:40:43.270313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.270321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.270643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.270651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.270954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.270962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.271255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.271262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.271562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.271571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.271914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.271922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.272201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.272210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.272518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.272526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.272751] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.272759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 00:29:29.708 [2024-11-05 04:40:43.273053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.708 [2024-11-05 04:40:43.273061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.708 qpair failed and we were unable to recover it. 
00:29:29.996 [2024-11-05 04:40:43.333570] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.996 [2024-11-05 04:40:43.333577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.996 qpair failed and we were unable to recover it. 00:29:29.996 [2024-11-05 04:40:43.333947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.996 [2024-11-05 04:40:43.333955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.996 qpair failed and we were unable to recover it. 00:29:29.996 [2024-11-05 04:40:43.334287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.996 [2024-11-05 04:40:43.334295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.996 qpair failed and we were unable to recover it. 00:29:29.996 [2024-11-05 04:40:43.334575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.996 [2024-11-05 04:40:43.334582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.996 qpair failed and we were unable to recover it. 00:29:29.996 [2024-11-05 04:40:43.334769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.996 [2024-11-05 04:40:43.334778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.996 qpair failed and we were unable to recover it. 00:29:29.996 [2024-11-05 04:40:43.335104] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.996 [2024-11-05 04:40:43.335113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.996 qpair failed and we were unable to recover it. 00:29:29.996 [2024-11-05 04:40:43.335477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.996 [2024-11-05 04:40:43.335486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.996 qpair failed and we were unable to recover it. 00:29:29.996 [2024-11-05 04:40:43.335800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.996 [2024-11-05 04:40:43.335809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.996 qpair failed and we were unable to recover it. 00:29:29.996 [2024-11-05 04:40:43.336111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.996 [2024-11-05 04:40:43.336120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.996 qpair failed and we were unable to recover it. 00:29:29.996 [2024-11-05 04:40:43.336425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.996 [2024-11-05 04:40:43.336434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 
00:29:29.997 [2024-11-05 04:40:43.336617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.336625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.336846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.336854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.337151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.337160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.337460] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.337469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.337755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.337763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.338072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.338081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.338399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.338407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.338700] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.338709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.339033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.339041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.339361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.339370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 
00:29:29.997 [2024-11-05 04:40:43.339577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.339586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.339846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.339857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.340146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.340155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.340337] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.340345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.340560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.340569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.340852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.340860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.341251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.341259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.341549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.341557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.341774] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.341782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.342002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.342010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 
00:29:29.997 [2024-11-05 04:40:43.342340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.342348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.342532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.342540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.342842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.342851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.343182] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.343191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.343508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.343516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.343723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.343732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.344060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.344070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.344376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.344385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.344560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.344569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.344742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.344755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 
00:29:29.997 [2024-11-05 04:40:43.344963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.344971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.345300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.345309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.345624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.345632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.345872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.345881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.346077] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.346085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.346264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.346271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.346442] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.997 [2024-11-05 04:40:43.346449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.997 qpair failed and we were unable to recover it. 00:29:29.997 [2024-11-05 04:40:43.346765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.346773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.347122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.347131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.347453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.347462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 
00:29:29.998 [2024-11-05 04:40:43.347767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.347775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.348097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.348105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.348319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.348327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.348652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.348660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.348861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.348869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.349145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.349153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.349463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.349472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.349775] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.349784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.349860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.349867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.350074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.350082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 
00:29:29.998 [2024-11-05 04:40:43.350392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.350400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.350716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.350727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.351049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.351058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.351353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.351362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.351544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.351552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.351727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.351735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.351924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.351933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.352231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.352240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.352545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.352553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.352878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.352887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 
00:29:29.998 [2024-11-05 04:40:43.353204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.353214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.353387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.353395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.353719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.353727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.354042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.354051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.354345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.354354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.354520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.354528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.354708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.354715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.354907] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.354915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.355240] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.355248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.355552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.355560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 
00:29:29.998 [2024-11-05 04:40:43.355740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.355750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.356052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.356061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.356259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.356267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.356595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.356604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.356849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.998 [2024-11-05 04:40:43.356857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.998 qpair failed and we were unable to recover it. 00:29:29.998 [2024-11-05 04:40:43.356918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.356924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.357116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.357123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.357452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.357460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.357584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.357591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.357793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.357803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 
00:29:29.999 [2024-11-05 04:40:43.358010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.358019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.358315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.358323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.358514] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.358522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.358704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.358713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.359036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.359044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.359352] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.359362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.359666] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.359675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.359966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.359974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.360278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.360287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.360552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.360560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 
00:29:29.999 [2024-11-05 04:40:43.360916] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.360924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.361239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.361248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.361574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.361582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.361922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.361932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.362234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.362242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.362563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.362571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.362705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.362711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.363040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.363050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.363398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.363406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.363715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.363724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 
00:29:29.999 [2024-11-05 04:40:43.363940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.363949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.364261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.364270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.364581] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.364588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.364643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.364650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.364967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.364975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.365279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.365287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.365597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.365606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:29.999 qpair failed and we were unable to recover it. 00:29:29.999 [2024-11-05 04:40:43.365806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:29.999 [2024-11-05 04:40:43.365814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.000 qpair failed and we were unable to recover it. 00:29:30.000 [2024-11-05 04:40:43.366097] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.000 [2024-11-05 04:40:43.366105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.000 qpair failed and we were unable to recover it. 00:29:30.000 [2024-11-05 04:40:43.366264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.000 [2024-11-05 04:40:43.366272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.000 qpair failed and we were unable to recover it. 
00:29:30.000 [2024-11-05 04:40:43.366499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.000 [2024-11-05 04:40:43.366506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.000 qpair failed and we were unable to recover it. 00:29:30.000 [2024-11-05 04:40:43.366679] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.000 [2024-11-05 04:40:43.366687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.000 qpair failed and we were unable to recover it. 00:29:30.000 [2024-11-05 04:40:43.366973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.000 [2024-11-05 04:40:43.366983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.000 qpair failed and we were unable to recover it. 00:29:30.000 [2024-11-05 04:40:43.367284] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.000 [2024-11-05 04:40:43.367292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.000 qpair failed and we were unable to recover it. 00:29:30.000 [2024-11-05 04:40:43.367633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.000 [2024-11-05 04:40:43.367642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.000 qpair failed and we were unable to recover it. 00:29:30.000 [2024-11-05 04:40:43.367855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.000 [2024-11-05 04:40:43.367863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.000 qpair failed and we were unable to recover it. 00:29:30.000 [2024-11-05 04:40:43.368085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.000 [2024-11-05 04:40:43.368093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.000 qpair failed and we were unable to recover it. 00:29:30.000 [2024-11-05 04:40:43.368363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.000 [2024-11-05 04:40:43.368371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.000 qpair failed and we were unable to recover it. 00:29:30.000 [2024-11-05 04:40:43.368704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.000 [2024-11-05 04:40:43.368714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.000 qpair failed and we were unable to recover it. 00:29:30.000 [2024-11-05 04:40:43.369018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.000 [2024-11-05 04:40:43.369027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.000 qpair failed and we were unable to recover it. 
00:29:30.000 [2024-11-05 04:40:43.369344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.000 [2024-11-05 04:40:43.369353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.000 qpair failed and we were unable to recover it.
00:29:30.001 (the preceding connect()/sock-connection/qpair-failed message triple repeated for every reconnect attempt from 04:40:43.369344 through 04:40:43.381860, all against tqpair=0x7f6018000b90, addr=10.0.0.2, port=4420)
00:29:30.001 Read completed with error (sct=0, sc=8) starting I/O failed
00:29:30.001 Write completed with error (sct=0, sc=8) starting I/O failed
00:29:30.001 (32 outstanding I/Os in total, 18 reads and 14 writes, completed with error (sct=0, sc=8); each was logged as starting I/O failed)
00:29:30.001 [2024-11-05 04:40:43.382606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:30.001 [2024-11-05 04:40:43.383169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.001 [2024-11-05 04:40:43.383274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.001 qpair failed and we were unable to recover it.
00:29:30.001 [2024-11-05 04:40:43.383724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.001 [2024-11-05 04:40:43.383779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.001 qpair failed and we were unable to recover it.
00:29:30.001 [2024-11-05 04:40:43.384149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.001 [2024-11-05 04:40:43.384158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.001 qpair failed and we were unable to recover it.
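For reference, errno 111 on Linux is ECONNREFUSED: each connect() to 10.0.0.2:4420 was actively refused, typically meaning nothing was accepting on that address and port at the time, which is what nvme_tcp_qpair_connect_sock keeps reporting here. A minimal standalone C sketch (not SPDK's posix.c; the address and port are taken from the log) that reproduces the same failure mode:

    /* econnrefused_demo.c: connect() to a refusing endpoint and print errno */
    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);               /* NVMe/TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* with no listener at the target this prints: errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }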
00:29:30.005 (the connect()/sock-connection/qpair-failed triple for tqpair=0x7f6018000b90, addr=10.0.0.2, port=4420 repeated for each further retry from 04:40:43.384441 through 04:40:43.426298)
00:29:30.005 [2024-11-05 04:40:43.426289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.005 [2024-11-05 04:40:43.426298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.005 qpair failed and we were unable to recover it.
00:29:30.006 [2024-11-05 04:40:43.426608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.426617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.426920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.426929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.427257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.427265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.427596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.427606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.427696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.427704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.428327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.428419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.429045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.429137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.429560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.429599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.429766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.429798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.430143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.430174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 
00:29:30.006 [2024-11-05 04:40:43.430515] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.430545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.431071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.431162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.431491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.431530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.431888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.431920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.432268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.432298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.432547] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.432576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.432820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.432852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.433174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.433206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.433556] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.433585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.433843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.433873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 
00:29:30.006 [2024-11-05 04:40:43.434213] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.434242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.434624] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.434654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.435045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.435076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.435388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.435418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.435790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.435820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.436189] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.436219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.436562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.436592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.437033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.437064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.437419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.437448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.437829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.437860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 
00:29:30.006 [2024-11-05 04:40:43.438233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.438269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.438507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.438544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.438984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.439016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.439406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.439436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.439739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.439779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.440024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.440054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.440421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.440450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.440833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.006 [2024-11-05 04:40:43.440864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.006 qpair failed and we were unable to recover it. 00:29:30.006 [2024-11-05 04:40:43.441288] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.441319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.441558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.441590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 
00:29:30.007 [2024-11-05 04:40:43.441843] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.441875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.442236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.442266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.442599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.442628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.442998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.443030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.443371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.443402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.443766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.443797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.444166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.444196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.444627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.444656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.444996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.445027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.445376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.445407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 
00:29:30.007 [2024-11-05 04:40:43.445759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.445790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.446183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.446213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.446578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.446608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.446959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.446991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.447293] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.447323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.447627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.447657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.448040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.448072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.448317] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.448352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.448588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.448616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.449024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.449056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 
00:29:30.007 [2024-11-05 04:40:43.449413] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.449443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.449791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.449821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.450170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.450199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.450560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.450591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.450937] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.450969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.451336] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.451366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.451705] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.451734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.452096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.452126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.452455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.452486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.452840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.452870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 
00:29:30.007 [2024-11-05 04:40:43.453235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.453272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.453611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.453641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.453974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.454004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.454406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.454435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.454762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.454793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.454980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.455012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.007 qpair failed and we were unable to recover it. 00:29:30.007 [2024-11-05 04:40:43.455376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.007 [2024-11-05 04:40:43.455406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.455759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.455790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.456135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.456165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.456507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.456537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 
00:29:30.008 [2024-11-05 04:40:43.456882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.456915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.457299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.457328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.457657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.457687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.458051] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.458080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.458410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.458442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.458790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.458823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.459185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.459216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.459555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.459585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.459919] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.459949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.460308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.460338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 
00:29:30.008 [2024-11-05 04:40:43.460678] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.460707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.461128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.461160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.461499] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.461529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.461769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.461798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.462161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.462190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.462528] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.462558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.462897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.462929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.463282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.463313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.463636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.463666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.464003] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.464035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 
00:29:30.008 [2024-11-05 04:40:43.464388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.464418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.464650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.464680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.465046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.465077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.465432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.465462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.465786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.465817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.466127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.466156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.466486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.466516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.466862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.466893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.467249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.467278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 00:29:30.008 [2024-11-05 04:40:43.467629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.008 [2024-11-05 04:40:43.467660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.008 qpair failed and we were unable to recover it. 
00:29:30.008 [2024-11-05 04:40:43.468010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.468047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.468385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.468416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.468770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.468801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.469165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.469192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.469536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.469564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.469913] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.469942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.470262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.470289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.470650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.470678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.470992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.471021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.471365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.471393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 
00:29:30.009 [2024-11-05 04:40:43.471724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.471761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.472111] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.472138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.472548] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.472577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.472920] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.472950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.473279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.473308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.473661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.473690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.474037] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.474068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.474383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.474413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.474782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.474815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.475177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.475207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 
00:29:30.009 [2024-11-05 04:40:43.475565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.475594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.475936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.475967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.476325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.476355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.476782] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.476813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.477174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.477204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.477465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.477495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.477759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.477790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.478151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.478187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.478526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.478556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 00:29:30.009 [2024-11-05 04:40:43.478813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.009 [2024-11-05 04:40:43.478845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.009 qpair failed and we were unable to recover it. 
00:29:30.014 [2024-11-05 04:40:43.547191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.014 [2024-11-05 04:40:43.547222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.014 qpair failed and we were unable to recover it. 00:29:30.014 [2024-11-05 04:40:43.547559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.014 [2024-11-05 04:40:43.547589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.014 qpair failed and we were unable to recover it. 00:29:30.014 [2024-11-05 04:40:43.547953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.014 [2024-11-05 04:40:43.547984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.014 qpair failed and we were unable to recover it. 00:29:30.014 [2024-11-05 04:40:43.548328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.014 [2024-11-05 04:40:43.548358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.548597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.548625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.548869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.548900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.549260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.549290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.549644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.549680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.550045] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.550076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.550424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.550453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 
00:29:30.015 [2024-11-05 04:40:43.550788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.550818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.551185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.551214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.551542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.551572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.551928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.551960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.552346] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.552375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.552600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.552632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.553004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.553035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.553367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.553397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.553767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.553798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.554155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.554184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 
00:29:30.015 [2024-11-05 04:40:43.554537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.554567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.554932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.554963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.555321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.555351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.555716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.555757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.556027] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.556060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.556395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.556425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.556793] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.556824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.557198] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.557227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.557569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.557601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.557981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.558011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 
00:29:30.015 [2024-11-05 04:40:43.558333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.558362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.558631] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.558663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.559093] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.559124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.559362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.559390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.559754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.559785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.560166] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.560196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.560601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.560631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.560874] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.560905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.561234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.561264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.561644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.561673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 
00:29:30.015 [2024-11-05 04:40:43.562072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.015 [2024-11-05 04:40:43.562103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.015 qpair failed and we were unable to recover it. 00:29:30.015 [2024-11-05 04:40:43.562450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.562480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.562814] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.562844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.563099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.563127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.563503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.563533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.563895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.563925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.564279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.564309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.564647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.564683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.564972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.565003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.565360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.565390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 
00:29:30.016 [2024-11-05 04:40:43.565757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.565789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.566121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.566151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.566546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.566575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.566716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.566758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.567068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.567098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.567454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.567484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.567825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.567856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.568206] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.568237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.568588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.568617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.568975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.569007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 
00:29:30.016 [2024-11-05 04:40:43.569361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.569390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.569714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.569744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.570085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.570115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.570447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.570478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.570781] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.570813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.571160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.571190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.571516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.571546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.571953] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.571983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.572327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.572357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.572711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.572740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 
00:29:30.016 [2024-11-05 04:40:43.573106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.573136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.573446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.573475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.573784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.573814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.574090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.574119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.574501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.574532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.574794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.574824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.575155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.575184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.575455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.575483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.575792] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.575823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 00:29:30.016 [2024-11-05 04:40:43.576200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.016 [2024-11-05 04:40:43.576229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.016 qpair failed and we were unable to recover it. 
00:29:30.017 [2024-11-05 04:40:43.576554] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.576583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.576930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.576960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.577320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.577349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.577712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.577741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.578118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.578149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.578497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.578529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.578784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.578816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.579156] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.579192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.579425] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.579455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.579788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.579819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 
00:29:30.017 [2024-11-05 04:40:43.580183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.580212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.580564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.580594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.580831] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.580864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.581227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.581257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.581424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.581453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.581786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.581816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.582210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.582240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.582597] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.582626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.582996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.583027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.583383] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.583413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 
00:29:30.017 [2024-11-05 04:40:43.583741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.583778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.584145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.584175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.584412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.584442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.584880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.584911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.585274] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.585304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.585654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.585683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.586032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.586063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.586304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.586333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.586569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.586598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.586959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.586991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 
00:29:30.017 [2024-11-05 04:40:43.587348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.587377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.587776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.587807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.588173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.017 [2024-11-05 04:40:43.588203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.017 qpair failed and we were unable to recover it. 00:29:30.017 [2024-11-05 04:40:43.588551] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.588580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.588972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.589004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.589144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.589172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.589595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.589625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.589997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.590030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.590327] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.590357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.590594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.590624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 
00:29:30.018 [2024-11-05 04:40:43.591034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.591065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.591311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.591340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.591680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.591710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.592060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.592091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.592440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.592471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.592824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.592855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.593232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.593262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.593492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.593527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.593865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.593897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.594165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.594195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 
00:29:30.018 [2024-11-05 04:40:43.594432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.594461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.594829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.594862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.595204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.595234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.595572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.595601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.595829] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.595859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.596237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.596267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.596498] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.596527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.596896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.596927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.597297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.597327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 00:29:30.018 [2024-11-05 04:40:43.597682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.018 [2024-11-05 04:40:43.597710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.018 qpair failed and we were unable to recover it. 
00:29:30.018 [2024-11-05 04:40:43.598005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.018 [2024-11-05 04:40:43.598035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:29:30.018 qpair failed and we were unable to recover it.
[... the same three messages — posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeat for every reconnect attempt from 04:40:43.598 through 04:40:43.674 ...]
00:29:30.320 [2024-11-05 04:40:43.673974] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.320 [2024-11-05 04:40:43.674005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:29:30.320 qpair failed and we were unable to recover it.
00:29:30.320 [2024-11-05 04:40:43.674367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.320 [2024-11-05 04:40:43.674396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.320 qpair failed and we were unable to recover it. 00:29:30.320 [2024-11-05 04:40:43.674764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.320 [2024-11-05 04:40:43.674795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.320 qpair failed and we were unable to recover it. 00:29:30.320 [2024-11-05 04:40:43.675157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.320 [2024-11-05 04:40:43.675188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.320 qpair failed and we were unable to recover it. 00:29:30.320 [2024-11-05 04:40:43.675411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.320 [2024-11-05 04:40:43.675439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.320 qpair failed and we were unable to recover it. 00:29:30.320 [2024-11-05 04:40:43.675805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.320 [2024-11-05 04:40:43.675841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.320 qpair failed and we were unable to recover it. 00:29:30.320 [2024-11-05 04:40:43.676234] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.320 [2024-11-05 04:40:43.676265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.320 qpair failed and we were unable to recover it. 00:29:30.320 [2024-11-05 04:40:43.676402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.320 [2024-11-05 04:40:43.676433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.320 qpair failed and we were unable to recover it. 00:29:30.320 [2024-11-05 04:40:43.676801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.320 [2024-11-05 04:40:43.676832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.320 qpair failed and we were unable to recover it. 00:29:30.320 [2024-11-05 04:40:43.677185] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.320 [2024-11-05 04:40:43.677217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.320 qpair failed and we were unable to recover it. 00:29:30.320 [2024-11-05 04:40:43.677544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.320 [2024-11-05 04:40:43.677574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.320 qpair failed and we were unable to recover it. 
00:29:30.321 [2024-11-05 04:40:43.677977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.678009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.678253] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.678282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.678646] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.678674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.679042] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.679073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.679414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.679445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.679788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.679818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.680199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.680228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.680639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.680668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.681039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.681071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.681302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.681331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 
00:29:30.321 [2024-11-05 04:40:43.681733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.681770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.682120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.682150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.682491] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.682520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.682941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.682971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.683326] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.683355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.683763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.683794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.684159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.684189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.684532] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.684561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.684914] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.684946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.685283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.685313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 
00:29:30.321 [2024-11-05 04:40:43.685601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.685633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.685987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.686018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.686244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.686273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.686632] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.686661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.687088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.687118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.687444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.687473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.687830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.687860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.688211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.688241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.688576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.688606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.688975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.689006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 
00:29:30.321 [2024-11-05 04:40:43.689364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.689395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.689759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.689789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.690144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.690174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.690414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.690445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.690779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.690815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.691181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.691211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.691600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.691631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.321 [2024-11-05 04:40:43.691985] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.321 [2024-11-05 04:40:43.692015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.321 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.692381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.692411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.692738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.692778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 
00:29:30.322 [2024-11-05 04:40:43.693201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.693231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.693633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.693663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.693955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.693987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.694248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.694279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.694617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.694646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.695025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.695055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.695447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.695477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.695815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.695845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.696069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.696098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.696316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.696345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 
00:29:30.322 [2024-11-05 04:40:43.696594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.696623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.696981] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.697012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.697355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.697384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.697805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.697837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.698201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.698231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.698552] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.698583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.698838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.698870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.699125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.699157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.699519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.699548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.699940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.699971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 
00:29:30.322 [2024-11-05 04:40:43.700321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.700351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.700697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.700727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.701130] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.701161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.701520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.701549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.701934] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.701965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.702311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.702342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.702578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.702607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.703063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.703094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.703431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.703461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.703826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.703857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 
00:29:30.322 [2024-11-05 04:40:43.704230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.704260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.704591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.704621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.704789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.704818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.705177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.705207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.705569] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.705605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.322 [2024-11-05 04:40:43.705944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.322 [2024-11-05 04:40:43.705976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.322 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.706314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.706345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.706672] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.706702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.707043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.707075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.707422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.707451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 
00:29:30.323 [2024-11-05 04:40:43.707805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.707835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.708227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.708257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.708617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.708646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.709022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.709053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.709401] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.709432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.709815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.709846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.710196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.710226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.710555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.710586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.710929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.710960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.711414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.711444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 
00:29:30.323 [2024-11-05 04:40:43.711819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.711851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.712202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.712232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.712591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.712620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.712836] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.712866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.713090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.713120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.713493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.713523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.713764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.713798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.714187] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.714217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.714578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.714608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.714740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.714779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 
00:29:30.323 [2024-11-05 04:40:43.715100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.715129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.715485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.715515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.715846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.715878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.716263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.716292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.716530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.716559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.716828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.716860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.717238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.717267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.717506] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.717535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.717785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.717815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 00:29:30.323 [2024-11-05 04:40:43.718160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.323 [2024-11-05 04:40:43.718189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.323 qpair failed and we were unable to recover it. 
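errno 111 is ECONNREFUSED: the test has taken the NVMe-oF target down, so every connect() the initiator issues toward 10.0.0.2:4420 is refused and the TCP qpair cannot be re-established. A minimal shell probe of the same failure mode, assuming bash's /dev/tcp redirection is available and reusing the address and port from the log, looks like:

    # Try the same TCP connect the initiator performs; while the target is
    # down this fails with errno 111 (ECONNREFUSED) inside the subshell.
    addr=10.0.0.2
    port=4420
    if timeout 1 bash -c "exec 3<>/dev/tcp/$addr/$port" 2>/dev/null; then
        echo "listener is up on $addr:$port"
    else
        echo "connect() to $addr:$port refused or timed out (target not listening)"
    fi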
00:29:30.323 [... failure triplets continue at 04:40:43.718562, .718944 and .719339 ...]
00:29:30.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3178256 Killed "${NVMF_APP[@]}" "$@"
00:29:30.324 [... failure triplets continue at 04:40:43.719715, .720144, .720513 and .720807, interleaved with the trace below ...]
00:29:30.324 04:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:30.324 04:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:30.324 04:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:30.324 [... failure triplet continues at 04:40:43.721237 ...]
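Reduced to its essentials, the kill-and-restart step traced above is roughly the following sketch (the namespace, binary path, and flags come straight from the log; the pid handling is simplified, and wait_for_rpc_sock is a hypothetical stand-in for the harness's waitforlisten, shown as a poll loop further below):

    # Simulate the target-side disconnect: kill the running nvmf_tgt outright.
    # This is what produces the "Killed ${NVMF_APP[@]}" line in the log.
    kill -9 "$nvmfpid"

    # disconnect_init 10.0.0.2: restart the target inside the test namespace.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!

    # waitforlisten: block until the new process answers on its RPC socket.
    wait_for_rpc_sock /var/tmp/spdk.sock "$nvmfpid"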
00:29:30.324 04:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:30.324 04:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:30.324 [... failure triplets continue uninterrupted from 04:40:43.721659 through 04:40:43.728268 ...]
00:29:30.324 [2024-11-05 04:40:43.728604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.324 [2024-11-05 04:40:43.728635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:29:30.324 qpair failed and we were unable to recover it.
00:29:30.324 [... the three-line pattern above repeats with only the timestamps advancing, through 2024-11-05 04:40:43.733861; the xtrace lines below were interleaved with it and are regrouped here in script order ...]
00:29:30.324 04:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:30.324 04:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=3179208
00:29:30.324 04:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 3179208
00:29:30.324 04:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 3179208 ']'
00:29:30.324 04:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:30.324 04:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:30.324 04:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:30.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:30.324 04:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:30.324 04:40:43 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
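The xtrace above shows the test relaunching nvmf_tgt inside the cvl_0_0_ns_spdk network namespace and then waiting for its RPC socket at /var/tmp/spdk.sock. Below is a condensed sketch of that launch-and-wait pattern; the command and flags are taken from the trace (path shortened), while the socket-file poll is a simplified stand-in for the real waitforlisten(), which performs additional readiness checks:

    # Sketch of the pattern in the trace above; illustrative, not the exact script.
    rpc_addr=/var/tmp/spdk.sock
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &   # flags as in the log
    nvmfpid=$!
    max_retries=100                                     # as in the log
    for ((i = 0; i < max_retries; i++)); do
        [ -S "${rpc_addr}" ] && break   # RPC socket exists => target is listening
        sleep 0.1
    done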
00:29:30.325 [2024-11-05 04:40:43.734225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.734256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.734587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.734617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.735035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.735066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.735377] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.735409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.735785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.735816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.736169] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.736200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.736527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.736558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.736896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.736927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.737157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.737196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.737531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.737561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 
00:29:30.325 [2024-11-05 04:40:43.737895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.737926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.738172] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.738202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.738454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.738487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.738828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.738859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.739168] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.739198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.739540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.739570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.739860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.739891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.740225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.740254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.740492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.740523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.740838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.740870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 
00:29:30.325 [2024-11-05 04:40:43.741237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.741267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.741495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.741525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.741754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.741786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.742134] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.742164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.742319] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.742350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.742685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.742716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.743080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.325 [2024-11-05 04:40:43.743112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.325 qpair failed and we were unable to recover it. 00:29:30.325 [2024-11-05 04:40:43.743479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.743509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.743806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.743838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.744203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.744233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 
00:29:30.326 [2024-11-05 04:40:43.744549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.744579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.744945] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.744976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.745306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.745337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.745694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.745723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.746173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.746204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.746559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.746589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.746857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.746888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.747306] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.747337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.747685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.747716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.748199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.748232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 
00:29:30.326 [2024-11-05 04:40:43.748469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.748499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.748832] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.748865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.749222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.749252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.749608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.749637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.750032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.750064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.750395] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.750424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.750799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.750829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.751196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.751226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.751433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.751469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.751838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.751870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 
00:29:30.326 [2024-11-05 04:40:43.752110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.752140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.752372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.752402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.752650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.752680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.752925] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.752957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.753278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.753308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.753574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.753604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.753980] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.754010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.754343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.754373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.754628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.754661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.755009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.755041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 
00:29:30.326 [2024-11-05 04:40:43.755367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.755397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.755635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.755665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.755911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.755942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.756183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.756213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.756580] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.326 [2024-11-05 04:40:43.756610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.326 qpair failed and we were unable to recover it. 00:29:30.326 [2024-11-05 04:40:43.756952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.756984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.757223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.757253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.757539] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.757568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.757812] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.757842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.758228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.758258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 
00:29:30.327 [2024-11-05 04:40:43.758508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.758537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.758809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.758840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.759079] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.759112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.759443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.759473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.759707] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.759737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.760072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.760104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.760335] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.760365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.760696] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.760726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.761176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.761207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.761555] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.761585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 
00:29:30.327 [2024-11-05 04:40:43.761901] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.761934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.762278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.762308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.762685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.762714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.763088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.763120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.763449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.763478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.763648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.763677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.763986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.764017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.764239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.764271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.764414] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.764454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.764827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.764858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 
00:29:30.327 [2024-11-05 04:40:43.765090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.765120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.765450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.765480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.765723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.765767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.766177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.766207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.766535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.766565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.766879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.766911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.767294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.767323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.767706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.767736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.768158] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.768190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.768511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.768540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 
00:29:30.327 [2024-11-05 04:40:43.768923] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.768955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.769197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.769230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.769456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.769486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.327 [2024-11-05 04:40:43.769834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.327 [2024-11-05 04:40:43.769864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.327 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.770211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.770241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.770591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.770620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.770983] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.771014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.771365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.771395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.771636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.771666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.772034] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.772064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 
00:29:30.328 [2024-11-05 04:40:43.772420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.772450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.772803] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.772834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.773181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.773211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.773582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.773612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.773960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.773991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.774367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.774397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.774545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.774575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.774852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.774884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.775237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.775266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.775639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.775669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 
00:29:30.328 [2024-11-05 04:40:43.776009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.776039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.776374] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.776403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.776807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.776838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.777196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.777225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.777431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.777461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.777828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.777858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.778205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.778235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.778567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.778596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.778864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.778902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.779276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.779305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 
00:29:30.328 [2024-11-05 04:40:43.779670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.779699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.780054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.780086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.780428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.780457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.780805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.780836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.781181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.781210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.781505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.781535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.781686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.781716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.782098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.782130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.782487] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.782516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.782789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.782820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 
00:29:30.328 [2024-11-05 04:40:43.783184] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.783213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.328 [2024-11-05 04:40:43.783419] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.328 [2024-11-05 04:40:43.783448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.328 qpair failed and we were unable to recover it. 00:29:30.329 [2024-11-05 04:40:43.783779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.329 [2024-11-05 04:40:43.783810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.329 qpair failed and we were unable to recover it. 00:29:30.329 [2024-11-05 04:40:43.784145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.329 [2024-11-05 04:40:43.784177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.329 qpair failed and we were unable to recover it. 00:29:30.329 [2024-11-05 04:40:43.784564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.329 [2024-11-05 04:40:43.784593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.329 qpair failed and we were unable to recover it. 00:29:30.329 [2024-11-05 04:40:43.784841] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.329 [2024-11-05 04:40:43.784875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.329 qpair failed and we were unable to recover it. 00:29:30.329 [2024-11-05 04:40:43.785201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.329 [2024-11-05 04:40:43.785231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.329 qpair failed and we were unable to recover it. 00:29:30.329 [2024-11-05 04:40:43.785546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.329 [2024-11-05 04:40:43.785576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.329 qpair failed and we were unable to recover it. 00:29:30.329 [2024-11-05 04:40:43.785673] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:29:30.329 [2024-11-05 04:40:43.785726] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.329 [2024-11-05 04:40:43.785927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.329 [2024-11-05 04:40:43.785957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.329 qpair failed and we were unable to recover it. 
[... the identical connect() failed / sock connection error / qpair failed sequence for tqpair=0x7f6014000b90 (10.0.0.2:4420) resumes immediately after initialization and repeats continuously until 04:40:43.853, ending with: ...]
00:29:30.334 [2024-11-05 04:40:43.853109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.334 [2024-11-05 04:40:43.853140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:29:30.334 qpair failed and we were unable to recover it.
00:29:30.334 [2024-11-05 04:40:43.853369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.853400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.853797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.853829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.854219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.854250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.854613] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.854643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.855017] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.855048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.855378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.855409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.855741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.855782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.856074] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.856104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.856344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.856375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.856609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.856639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 
00:29:30.334 [2024-11-05 04:40:43.856996] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.857027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.857406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.857436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.857777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.857810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.858114] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.858145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.858513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.858543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.858932] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.858965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.859298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.859329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.859667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.859698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.859862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.859893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.860263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.860292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 
00:29:30.334 [2024-11-05 04:40:43.860641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.860672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.861047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.334 [2024-11-05 04:40:43.861079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.334 qpair failed and we were unable to recover it. 00:29:30.334 [2024-11-05 04:40:43.861429] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.861459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.861783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.861815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.862199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.862230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.862564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.862596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.862935] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.862968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.863333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.863364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.863741] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.863784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.864148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.864178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 
00:29:30.335 [2024-11-05 04:40:43.864417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.864447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.864766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.864799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.865190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.865220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.865563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.865593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.865930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.865963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.866300] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.866330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.866669] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.866699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.867084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.867115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.867364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.867394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.867738] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.867774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 
00:29:30.335 [2024-11-05 04:40:43.868120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.868150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.868367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.868397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.868743] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.868781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.869124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.869155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.869423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.869453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.869798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.869831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.870196] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.870227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.870572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.870602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.870966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.870997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.871340] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.871372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 
00:29:30.335 [2024-11-05 04:40:43.871712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.871742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.871988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.872019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.872370] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.872401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.872607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.872639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.873016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.873048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.873440] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.873471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.873833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.873864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.874218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.874249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.874622] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.874652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 00:29:30.335 [2024-11-05 04:40:43.874992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.335 [2024-11-05 04:40:43.875024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.335 qpair failed and we were unable to recover it. 
00:29:30.335 [2024-11-05 04:40:43.875381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.875411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.875764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.875795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.876181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.876211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.876589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.876619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.876838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.876870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.877226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.877257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.877601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.877631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.877880] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.877913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.878031] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.878069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.878418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.878449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 
00:29:30.336 [2024-11-05 04:40:43.878717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.878755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.879140] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.879171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.879529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.879559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.879921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.879953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.880378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.880408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.880640] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.880671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.881053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.881083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.881422] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.881453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.881797] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.881829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.882221] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.882253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 
00:29:30.336 [2024-11-05 04:40:43.882504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.882535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.882912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.882944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.883173] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.883203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.883544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.883574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.883928] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.883959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.884331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.884362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.884725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.884762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.885147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.885177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.885402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.885432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.885785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.885818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 
00:29:30.336 [2024-11-05 04:40:43.886204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.886233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.886446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.886476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.886897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.886929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.887331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.887359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:30.336 [2024-11-05 04:40:43.887362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.887697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.887726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.888092] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.888123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.888489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.888521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.888876] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.336 [2024-11-05 04:40:43.888907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.336 qpair failed and we were unable to recover it. 00:29:30.336 [2024-11-05 04:40:43.889277] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.889307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 
00:29:30.337 [2024-11-05 04:40:43.889682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.889712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.890085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.890117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.890461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.890491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.890878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.890913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.891264] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.891295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.891530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.891561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.891771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.891803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.892167] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.892198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.892437] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.892467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.892856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.892888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 
00:29:30.337 [2024-11-05 04:40:43.893262] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.893292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.893639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.893671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.894002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.894033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.894372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.894403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.894762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.894795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.895046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.895079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.895283] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.895314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.895636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.895667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.896014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.896047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.896392] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.896423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 
00:29:30.337 [2024-11-05 04:40:43.896800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.896831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.897207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.897237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.897598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.897636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.897900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.897934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.898299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.898331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.898702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.898732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.899078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.899109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.899443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.899473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.899693] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.899723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.900075] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.900107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 
00:29:30.337 [2024-11-05 04:40:43.900474] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.900505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.900737] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.900784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.901146] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.901176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.901390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.901420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.901739] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.901780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.902163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.902193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.902561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.902592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.337 [2024-11-05 04:40:43.902927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.337 [2024-11-05 04:40:43.902959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.337 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.903298] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.903328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.903447] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.903481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 
00:29:30.338 [2024-11-05 04:40:43.903882] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.903914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.904248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.904277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.904644] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.904673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.905039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.905071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.905444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.905473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.905830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.905861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.906229] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.906259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.906609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.906639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.907055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.907086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.907446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.907477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 
00:29:30.338 [2024-11-05 04:40:43.907809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.907841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.908180] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.908211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.908357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.908387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.908757] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.908788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.909148] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.909178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.909533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.909562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.909941] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.909973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.910347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.910377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.910585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.910618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.910966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.910998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 
00:29:30.338 [2024-11-05 04:40:43.911360] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.911392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.911761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.911793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.912124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.912161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.912516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.912550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.912883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.912914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.913265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.913295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.913642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.913673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.914004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.914035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.914257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.914290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.914513] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.914542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 
00:29:30.338 [2024-11-05 04:40:43.914870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.338 [2024-11-05 04:40:43.914902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.338 qpair failed and we were unable to recover it. 00:29:30.338 [2024-11-05 04:40:43.915261] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.915291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.915658] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.915687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.916052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.916083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.916512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.916542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.916763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.916794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.917032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.917065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.917428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.917458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.917778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.917809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.918179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.918209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 
00:29:30.339 [2024-11-05 04:40:43.918559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.918589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.918936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.918967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.919307] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.919337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.919686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.919715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.920105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.920137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.920530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.920561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.920903] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.920934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.921313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.921344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.921671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.921701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.922080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.922112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 
00:29:30.339 [2024-11-05 04:40:43.922455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.922485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.922824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.922856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.923217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.923247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.923609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.923640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.923986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.924017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.924364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.924394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.924721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.924760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.925126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.925158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.925538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.925569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.925794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.925824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 
00:29:30.339 [2024-11-05 04:40:43.926186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.926218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.926587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.926618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.926978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.927017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.927368] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.927398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.927729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.927737] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.339 [2024-11-05 04:40:43.927768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 [2024-11-05 04:40:43.927775] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.339 [2024-11-05 04:40:43.927786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.339 [2024-11-05 04:40:43.927794] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.927800] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.339 [2024-11-05 04:40:43.928116] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.928147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.928497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.928527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it. 00:29:30.339 [2024-11-05 04:40:43.928879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.339 [2024-11-05 04:40:43.928910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.339 qpair failed and we were unable to recover it.
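(The app_setup_trace notices above describe how to pull the tracepoint data for this run. A short sketch following those instructions; the spdk_trace invocation is quoted verbatim from the log, while the destination path is an assumption for illustration:

  spdk_trace -s nvmf -i 0                      # capture a snapshot of events at runtime
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0   # or copy the trace file for offline analysis/debug
)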
00:29:30.340 [2024-11-05 04:40:43.929294] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.929325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.929673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.929704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.929668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:30.340 [2024-11-05 04:40:43.929820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:30.340 [2024-11-05 04:40:43.930047] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.930078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.930143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:30.340 [2024-11-05 04:40:43.930153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:30.340 [2024-11-05 04:40:43.930430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.930459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.930702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.930733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.931107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.931139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.931473] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.931504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.931872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.931903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.932242] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.932273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 
00:29:30.340 [2024-11-05 04:40:43.932653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.932683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.933033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.933064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.933410] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.933441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.933795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.933826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.934175] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.934205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.934542] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.934573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.934825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.934856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.935204] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.935234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.935595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.935625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.935956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.935989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 
00:29:30.340 [2024-11-05 04:40:43.936369] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.936399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.936759] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.936790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.937181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.937211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.937544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.937575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.340 [2024-11-05 04:40:43.937819] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.340 [2024-11-05 04:40:43.937851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.340 qpair failed and we were unable to recover it. 00:29:30.623 [2024-11-05 04:40:43.938207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.623 [2024-11-05 04:40:43.938240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.623 qpair failed and we were unable to recover it. 00:29:30.623 [2024-11-05 04:40:43.938468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.623 [2024-11-05 04:40:43.938498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.623 qpair failed and we were unable to recover it. 00:29:30.623 [2024-11-05 04:40:43.938834] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.623 [2024-11-05 04:40:43.938865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.623 qpair failed and we were unable to recover it. 00:29:30.623 [2024-11-05 04:40:43.939223] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.623 [2024-11-05 04:40:43.939253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.623 qpair failed and we were unable to recover it. 00:29:30.623 [2024-11-05 04:40:43.939601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.623 [2024-11-05 04:40:43.939631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.623 qpair failed and we were unable to recover it. 
00:29:30.623 [2024-11-05 04:40:43.939973] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.623 [2024-11-05 04:40:43.940004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.623 qpair failed and we were unable to recover it. 00:29:30.623 [2024-11-05 04:40:43.940347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.940378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.940717] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.940795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.941151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.941182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.941564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.941594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.941930] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.941962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.942320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.942350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.942708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.942738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.943143] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.943174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.943508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.943539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 
00:29:30.624 [2024-11-05 04:40:43.943894] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.943926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.944302] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.944332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.944691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.944722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.945078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.945110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.945454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.945483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.945825] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.945856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.946224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.946255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.946588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.946618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.946986] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.947017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.947381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.947411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 
00:29:30.624 [2024-11-05 04:40:43.947773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.947805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.948159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.948188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.948524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.948555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.948800] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.948832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.949055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.949085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.949428] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.949458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.949805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.949836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.950232] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.950264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.950585] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.950615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.950939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.950970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 
00:29:30.624 [2024-11-05 04:40:43.951316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.951346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.951724] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.951766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.952132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.952162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.952375] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.952405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.952673] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.952703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.953061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.953095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.953492] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.953522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.953883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.953915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.624 [2024-11-05 04:40:43.954268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.624 [2024-11-05 04:40:43.954299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.624 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.954652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.954682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 
00:29:30.625 [2024-11-05 04:40:43.955025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.955056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.955444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.955474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.955813] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.955851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.956230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.956260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.956618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.956648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.956993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.957024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.957366] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.957397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.957766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.957798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.958170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.958200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.958537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.958568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 
00:29:30.625 [2024-11-05 04:40:43.958911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.958942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.959255] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.959287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.959649] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.959679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.960049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.960081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.960444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.960475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.960838] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.960868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.961210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.961241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.961578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.961608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.961938] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.961970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.962305] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.962336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 
00:29:30.625 [2024-11-05 04:40:43.962697] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.962728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.963088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.963117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.963454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.963485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.963708] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.963739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.964131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.964162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.964490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.964520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.964922] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.964952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.965259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.965289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.965651] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.965682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 00:29:30.625 [2024-11-05 04:40:43.966004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.625 [2024-11-05 04:40:43.966038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.625 qpair failed and we were unable to recover it. 
00:29:30.631 [2024-11-05 04:40:44.036142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.631 [2024-11-05 04:40:44.036176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.631 qpair failed and we were unable to recover it. 00:29:30.631 [2024-11-05 04:40:44.036505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.631 [2024-11-05 04:40:44.036536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.631 qpair failed and we were unable to recover it. 00:29:30.631 [2024-11-05 04:40:44.036895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.631 [2024-11-05 04:40:44.036927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.631 qpair failed and we were unable to recover it. 00:29:30.631 [2024-11-05 04:40:44.037263] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.631 [2024-11-05 04:40:44.037294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.631 qpair failed and we were unable to recover it. 00:29:30.631 [2024-11-05 04:40:44.037643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.631 [2024-11-05 04:40:44.037673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.631 qpair failed and we were unable to recover it. 00:29:30.631 [2024-11-05 04:40:44.038040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.631 [2024-11-05 04:40:44.038072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.631 qpair failed and we were unable to recover it. 00:29:30.631 [2024-11-05 04:40:44.038412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.631 [2024-11-05 04:40:44.038442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.631 qpair failed and we were unable to recover it. 00:29:30.631 [2024-11-05 04:40:44.038785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.631 [2024-11-05 04:40:44.038815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.631 qpair failed and we were unable to recover it. 00:29:30.631 [2024-11-05 04:40:44.039217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.631 [2024-11-05 04:40:44.039248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.631 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.039617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.039648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 
00:29:30.632 [2024-11-05 04:40:44.040004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.040036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.040244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.040274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.040627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.040664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.041023] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.041054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.041433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.041464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.041804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.041834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.042224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.042254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.042577] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.042607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.042964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.042995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.043199] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.043229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 
00:29:30.632 [2024-11-05 04:40:44.043534] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.043564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.043893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.043926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.044162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.044193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.044523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.044553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.044780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.044813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.045066] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.045097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.045468] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.045499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.045845] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.045877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.046230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.046259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.046630] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.046660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 
00:29:30.632 [2024-11-05 04:40:44.046995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.047028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.047398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.047429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.047780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.047811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.048021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.048051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.048402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.048432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.048791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.048823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.049144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.049174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.049281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.049314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.049546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.049575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.049924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.049957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 
00:29:30.632 [2024-11-05 04:40:44.050178] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.050208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.050561] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.050591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.050949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.050980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.051331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.051362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.632 qpair failed and we were unable to recover it. 00:29:30.632 [2024-11-05 04:40:44.051567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.632 [2024-11-05 04:40:44.051597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.051995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.052026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.052382] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.052413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.052616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.052648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.052847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.052878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.053256] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.053287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 
00:29:30.633 [2024-11-05 04:40:44.053667] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.053698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.054071] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.054103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.054444] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.054482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.054579] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.054610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.054977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.055009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.055214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.055244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.055596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.055627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.055965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.055997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.056214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.056246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.056466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.056496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 
00:29:30.633 [2024-11-05 04:40:44.056854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.056885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.057257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.057287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.057643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.057675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.058014] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.058045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.058390] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.058421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.058785] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.058816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.059041] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.059071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.059416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.059447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.059794] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.059825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.060062] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.060091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 
00:29:30.633 [2024-11-05 04:40:44.060432] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.060462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.060680] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.060709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.060978] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.061010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.061329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.061361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.061677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.061706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.062043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.633 [2024-11-05 04:40:44.062075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.633 qpair failed and we were unable to recover it. 00:29:30.633 [2024-11-05 04:40:44.062281] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.062310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.062545] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.062575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.062817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.062852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.063100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.063131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 
00:29:30.634 [2024-11-05 04:40:44.063354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.063383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.063670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.063700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.063957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.063989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.064318] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.064349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.064711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.064741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.065118] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.065149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.065446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.065477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.065811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.065842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.066250] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.066281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.066495] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.066525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 
00:29:30.634 [2024-11-05 04:40:44.066878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.066909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.067278] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.067309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.067675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.067710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.067955] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.067986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.068243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.068278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.068589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.068620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.069002] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.069033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.069241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.069270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.069490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.069522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.069727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.069767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 
00:29:30.634 [2024-11-05 04:40:44.070129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.070159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.070365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.070396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.070765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.070797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.071153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.071185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.071418] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.071449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.071776] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.071807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.072036] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.072068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.072296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.072326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.072565] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.072595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.072837] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.072872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 
00:29:30.634 [2024-11-05 04:40:44.073208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.634 [2024-11-05 04:40:44.073239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.634 qpair failed and we were unable to recover it. 00:29:30.634 [2024-11-05 04:40:44.073456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.073485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.073677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.073708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.073988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.074021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.074387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.074418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.074766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.074798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.075141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.075171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.075503] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.075533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.075897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.075928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.076157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.076188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 
00:29:30.635 [2024-11-05 04:40:44.076541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.076572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.076931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.076962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.077289] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.077321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.077677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.077707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.077991] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.078026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.078362] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.078392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.078773] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.078805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.079161] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.079191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.079564] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.079594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 00:29:30.635 [2024-11-05 04:40:44.079949] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.635 [2024-11-05 04:40:44.079982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.635 qpair failed and we were unable to recover it. 
00:29:30.635 [2024-11-05 04:40:44.080308] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:29:30.635 [2024-11-05 04:40:44.080339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 
00:29:30.635 qpair failed and we were unable to recover it. 
[the same three-line failure repeats for 209 further connection attempts, timestamps 04:40:44.080669 through 04:40:44.155768: connect() refused (errno = 111, ECONNREFUSED) to addr=10.0.0.2, port=4420 on tqpair=0x7f6014000b90, each attempt ending "qpair failed and we were unable to recover it."]
00:29:30.641 [2024-11-05 04:40:44.156094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.641 [2024-11-05 04:40:44.156125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.641 qpair failed and we were unable to recover it. 00:29:30.641 [2024-11-05 04:40:44.156435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.641 [2024-11-05 04:40:44.156465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.641 qpair failed and we were unable to recover it. 00:29:30.641 [2024-11-05 04:40:44.156799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.641 [2024-11-05 04:40:44.156830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.641 qpair failed and we were unable to recover it. 00:29:30.641 [2024-11-05 04:40:44.157207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.641 [2024-11-05 04:40:44.157238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.641 qpair failed and we were unable to recover it. 00:29:30.641 [2024-11-05 04:40:44.157584] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.641 [2024-11-05 04:40:44.157615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.641 qpair failed and we were unable to recover it. 00:29:30.641 [2024-11-05 04:40:44.157968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.641 [2024-11-05 04:40:44.157999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.641 qpair failed and we were unable to recover it. 00:29:30.641 [2024-11-05 04:40:44.158209] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.641 [2024-11-05 04:40:44.158239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.641 qpair failed and we were unable to recover it. 00:29:30.641 [2024-11-05 04:40:44.158576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.641 [2024-11-05 04:40:44.158605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.641 qpair failed and we were unable to recover it. 00:29:30.641 [2024-11-05 04:40:44.158964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.641 [2024-11-05 04:40:44.158996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.641 qpair failed and we were unable to recover it. 00:29:30.641 [2024-11-05 04:40:44.159190] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.641 [2024-11-05 04:40:44.159220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.641 qpair failed and we were unable to recover it. 
00:29:30.641 [2024-11-05 04:40:44.159421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.641 [2024-11-05 04:40:44.159449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.641 qpair failed and we were unable to recover it. 00:29:30.641 [2024-11-05 04:40:44.159807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.641 [2024-11-05 04:40:44.159838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.641 qpair failed and we were unable to recover it. 00:29:30.641 [2024-11-05 04:40:44.159929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.641 [2024-11-05 04:40:44.159957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.641 qpair failed and we were unable to recover it. 00:29:30.641 [2024-11-05 04:40:44.160258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.641 [2024-11-05 04:40:44.160287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.641 qpair failed and we were unable to recover it. 00:29:30.641 [2024-11-05 04:40:44.160617] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.160647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.160867] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.160896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.161259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.161288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.161638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.161667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.162033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.162063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.162441] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.162479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 
00:29:30.642 [2024-11-05 04:40:44.162691] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.162723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.162959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.162988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.163385] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.163415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.163742] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.163782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.164151] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.164181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.164537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.164568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.164898] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.164930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.165149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.165177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.165512] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.165542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.165908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.165941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 
00:29:30.642 [2024-11-05 04:40:44.166259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.166288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.166655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.166685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.167053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.167083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.167311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.167341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.167699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.167728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.167959] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.167989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.168356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.168387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.168716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.168765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.169006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.169036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.169356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.169387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 
00:29:30.642 [2024-11-05 04:40:44.169822] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.169853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.170231] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.170262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.170647] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.170677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.170917] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.170947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.171314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.171344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.171559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.171588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.171828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.171860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.172222] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.172253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.172433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.172461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 00:29:30.642 [2024-11-05 04:40:44.172807] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.642 [2024-11-05 04:40:44.172838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.642 qpair failed and we were unable to recover it. 
00:29:30.642 [2024-11-05 04:40:44.173070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.173099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.173421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.173450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.173801] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.173832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.174049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.174079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.174443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.174474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.174851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.174881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.175082] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.175110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.175461] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.175491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.175865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.175898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.176121] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.176157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 
00:29:30.643 [2024-11-05 04:40:44.176371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.176401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.176783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.176815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.177011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.177040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.177290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.177325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.177538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.177570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.177809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.177844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.178065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.178094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.178316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.178346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.178764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.178795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.179018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.179047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 
00:29:30.643 [2024-11-05 04:40:44.179398] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.179428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.179643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.179673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.180028] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.180059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.180420] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.180452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.180817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.180848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.181090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.181120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.181303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.181334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.181670] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.181700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.181947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.643 [2024-11-05 04:40:44.181981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.643 qpair failed and we were unable to recover it. 00:29:30.643 [2024-11-05 04:40:44.182345] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.182374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 
00:29:30.644 [2024-11-05 04:40:44.182721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.182762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.183109] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.183140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.183381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.183410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.183523] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.183555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.183733] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.183773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.184164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.184194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.184406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.184435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.184664] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.184695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.185083] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.185115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.185455] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.185487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 
00:29:30.644 [2024-11-05 04:40:44.185868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.185899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.186228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.186259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.186502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.186534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.186871] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.186902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.187285] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.187315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.187526] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.187557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.187779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.187809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.188163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.188193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.188573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.188603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.188870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.188907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 
00:29:30.644 [2024-11-05 04:40:44.189241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.189271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.189606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.189636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.189848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.189880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.189993] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.190024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.190387] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.190416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.190767] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.190799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.191107] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.191137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.191489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.191520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.191711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.191742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.191975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.192008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 
00:29:30.644 [2024-11-05 04:40:44.192325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.192357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.192571] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.192603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.192970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.193000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.193356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.193388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.193756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.193789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.194149] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.194177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.644 [2024-11-05 04:40:44.194518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.644 [2024-11-05 04:40:44.194547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.644 qpair failed and we were unable to recover it. 00:29:30.645 [2024-11-05 04:40:44.194900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.645 [2024-11-05 04:40:44.194933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.645 qpair failed and we were unable to recover it. 00:29:30.645 [2024-11-05 04:40:44.195243] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.645 [2024-11-05 04:40:44.195272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.645 qpair failed and we were unable to recover it. 00:29:30.645 [2024-11-05 04:40:44.195599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.645 [2024-11-05 04:40:44.195628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.645 qpair failed and we were unable to recover it. 
00:29:30.645 [2024-11-05 04:40:44.195988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.645 [2024-11-05 04:40:44.196018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.645 qpair failed and we were unable to recover it. 00:29:30.645 [2024-11-05 04:40:44.196364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.645 [2024-11-05 04:40:44.196395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.645 qpair failed and we were unable to recover it. 00:29:30.645 [2024-11-05 04:40:44.196783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.645 [2024-11-05 04:40:44.196814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.645 qpair failed and we were unable to recover it. 00:29:30.645 [2024-11-05 04:40:44.197053] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.645 [2024-11-05 04:40:44.197083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.645 qpair failed and we were unable to recover it. 00:29:30.645 [2024-11-05 04:40:44.197427] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.645 [2024-11-05 04:40:44.197456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.645 qpair failed and we were unable to recover it. 00:29:30.645 [2024-11-05 04:40:44.197817] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.645 [2024-11-05 04:40:44.197849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.645 qpair failed and we were unable to recover it. 00:29:30.645 [2024-11-05 04:40:44.198224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.645 [2024-11-05 04:40:44.198255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.645 qpair failed and we were unable to recover it. 00:29:30.645 [2024-11-05 04:40:44.198623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.645 [2024-11-05 04:40:44.198653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.645 qpair failed and we were unable to recover it. 00:29:30.645 [2024-11-05 04:40:44.199000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.645 [2024-11-05 04:40:44.199032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.645 qpair failed and we were unable to recover it. 00:29:30.645 [2024-11-05 04:40:44.199384] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.645 [2024-11-05 04:40:44.199415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.645 qpair failed and we were unable to recover it. 
00:29:30.645 [2024-11-05 04:40:44.199735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.645 [2024-11-05 04:40:44.199774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:29:30.645 qpair failed and we were unable to recover it.
00:29:30.645 [... the same three-line failure (posix.c:1055:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 04:40:44.199 through 04:40:44.272; duplicate entries collapsed ...]
00:29:30.927 [2024-11-05 04:40:44.272233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.927 [2024-11-05 04:40:44.272264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:29:30.927 qpair failed and we were unable to recover it.
00:29:30.927 [2024-11-05 04:40:44.272371] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.272402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 00:29:30.927 [2024-11-05 04:40:44.272648] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.272679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 00:29:30.927 [2024-11-05 04:40:44.273008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.273041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 00:29:30.927 [2024-11-05 04:40:44.273271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.273300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 00:29:30.927 [2024-11-05 04:40:44.273635] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.273667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 00:29:30.927 [2024-11-05 04:40:44.273918] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.273951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 00:29:30.927 [2024-11-05 04:40:44.274188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.274218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 00:29:30.927 [2024-11-05 04:40:44.274430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.274460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 00:29:30.927 [2024-11-05 04:40:44.274802] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.274835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 00:29:30.927 [2024-11-05 04:40:44.275039] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.275069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 
00:29:30.927 [2024-11-05 04:40:44.275443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.275473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 00:29:30.927 [2024-11-05 04:40:44.275686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.275715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 00:29:30.927 [2024-11-05 04:40:44.275966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.275998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 00:29:30.927 [2024-11-05 04:40:44.276236] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.276267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 00:29:30.927 [2024-11-05 04:40:44.276611] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.276640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 00:29:30.927 [2024-11-05 04:40:44.276858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.927 [2024-11-05 04:40:44.276889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.927 qpair failed and we were unable to recover it. 00:29:30.927 [2024-11-05 04:40:44.277126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.277156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.277378] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.277410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.277729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.277771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.278026] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.278056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 
00:29:30.928 [2024-11-05 04:40:44.278297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.278330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.278519] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.278553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.278895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.278928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.279159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.279188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.279394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.279423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.279538] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.279571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.279887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.279932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.280176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.280205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.280421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.280451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.280811] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.280843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 
00:29:30.928 [2024-11-05 04:40:44.281247] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.281277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.281493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.281522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.281864] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.281896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.282249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.282278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.282645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.282676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.282897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.282927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.283258] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.283288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.283653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.283684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.284035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.284066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.284397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.284428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 
00:29:30.928 [2024-11-05 04:40:44.284772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.284805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.285124] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.285154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.285527] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.285557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.285921] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.285953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.286052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.286080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.286347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.286377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.286695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.286725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.287081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.287113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.287328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.287357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.287725] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.287764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 
00:29:30.928 [2024-11-05 04:40:44.288000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.288030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.288276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.288304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.288549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.288582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.288870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.928 [2024-11-05 04:40:44.288902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.928 qpair failed and we were unable to recover it. 00:29:30.928 [2024-11-05 04:40:44.289139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.289168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.289546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.289576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.289789] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.289819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.290197] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.290229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.290596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.290625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.290964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.290996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 
00:29:30.929 [2024-11-05 04:40:44.291329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.291360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.291562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.291591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.291971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.292003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.292363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.292394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.292731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.292771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.292865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.292894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.293245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.293283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.293609] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.293640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.293860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.293891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.294096] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.294126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 
00:29:30.929 [2024-11-05 04:40:44.294355] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.294386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.294790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.294821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.295211] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.295241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.295333] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.295360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.295701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.295731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.296103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.296134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.296341] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.296370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.296777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.296808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.297191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.297221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.297595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.297625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 
00:29:30.929 [2024-11-05 04:40:44.297962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.297994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.298329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.298359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.298699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.298731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.298940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.298972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.299224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.299253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.299603] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.299633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.299856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.299886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.300276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.300306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.300675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.300705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.300865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.300895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 
00:29:30.929 [2024-11-05 04:40:44.301135] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.301164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.301502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.929 [2024-11-05 04:40:44.301534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.929 qpair failed and we were unable to recover it. 00:29:30.929 [2024-11-05 04:40:44.301908] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.301941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.302286] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.302318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.302668] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.302698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.303081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.303113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.303445] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.303476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.303821] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.303851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.304085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.304114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.304463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.304493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 
00:29:30.930 [2024-11-05 04:40:44.304902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.304934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.305303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.305334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.305546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.305575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.305896] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.305928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.306320] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.306351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.306557] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.306588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.306999] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.307036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.307400] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.307431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.307645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.307674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.307904] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.307936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 
00:29:30.930 [2024-11-05 04:40:44.308290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.308320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.308643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.308673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.309063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.309095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.309446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.309478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.309695] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.309726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.310108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.310139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.310505] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.310536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.310878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.310912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.311343] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.311372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.311711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.311742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 
00:29:30.930 [2024-11-05 04:40:44.312126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.312158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.312363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.312392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.312720] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.312757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.313006] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.930 [2024-11-05 04:40:44.313038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.930 qpair failed and we were unable to recover it. 00:29:30.930 [2024-11-05 04:40:44.313365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.313396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.313762] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.313796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.314145] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.314174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.314544] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.314575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.314943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.314977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.315313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.315343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 
00:29:30.931 [2024-11-05 04:40:44.315704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.315735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.316065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.316097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.316466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.316497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.316842] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.316874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.317226] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.317257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.317627] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.317658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.317987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.318020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.318365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.318396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.318756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.318790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.319105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.319135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 
00:29:30.931 [2024-11-05 04:40:44.319462] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.319493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.319740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.319781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.320126] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.320156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.320531] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.320560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.320769] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.320799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.321035] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.321066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.321453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.321489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.321852] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.321882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.322235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.322265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.322629] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.322661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 
00:29:30.931 [2024-11-05 04:40:44.323007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.323039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.323411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.323441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.323721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.323760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.324125] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.324157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.324511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.324543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.324927] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.324958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.325339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.325370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.325601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.325631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.326004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.326036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.326373] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.326403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 
00:29:30.931 [2024-11-05 04:40:44.326786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.326818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.931 [2024-11-05 04:40:44.327040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.931 [2024-11-05 04:40:44.327069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.931 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.327386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.327418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.327795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.327825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.328214] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.328245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.328587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.328618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.328994] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.329026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.329276] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.329307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.329606] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.329636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.329866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.329898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 
00:29:30.932 [2024-11-05 04:40:44.330237] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.330269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.330642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.330671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.330884] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.330916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.331311] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.331342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.331588] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.331617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.331972] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.332002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.332244] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.332273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.332612] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.332641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.333007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.333042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.333434] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.333464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 
00:29:30.932 [2024-11-05 04:40:44.333833] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.333865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.334268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.334299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.334661] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.334692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.334924] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.334956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.335334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.335364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.335711] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.335742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.336088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.336125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.336480] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.336511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.336780] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.336811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.337052] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.337082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 
00:29:30.932 [2024-11-05 04:40:44.337449] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.337479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.337723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.337772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.338137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.338168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.338510] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.338542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.338895] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.338927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.339271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.339301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.339546] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.339575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.339933] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.339966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.340339] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.340371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 00:29:30.932 [2024-11-05 04:40:44.340701] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.932 [2024-11-05 04:40:44.340731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.932 qpair failed and we were unable to recover it. 
00:29:30.933 [2024-11-05 04:40:44.341100] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.341132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.341481] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.341509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.341731] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.341771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.342025] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.342055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.342423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.342453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.342795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.342826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.343186] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.343217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.343490] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.343520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.343849] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.343880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.344265] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.344296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 
00:29:30.933 [2024-11-05 04:40:44.344602] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.344632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.344848] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.344879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.345122] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.345151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.345502] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.345534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.345786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.345816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.346069] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.346099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.346438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.346469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.346828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.346858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.347085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.347115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.347475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.347505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 
00:29:30.933 [2024-11-05 04:40:44.347734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.347770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.348162] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.348192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.348448] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.348478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.348866] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.348896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.349260] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.349291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.349638] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.349670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.350032] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.350075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.350435] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.350467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.350840] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.350872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.351241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.351272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 
00:29:30.933 [2024-11-05 04:40:44.351520] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.351550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.351940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.351971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.352078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.352108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.352364] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.352393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.352764] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.352796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.353011] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.353041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.353393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.353424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.933 qpair failed and we were unable to recover it. 00:29:30.933 [2024-11-05 04:40:44.353796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.933 [2024-11-05 04:40:44.353829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.354179] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.354209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.354553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.354585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 
00:29:30.934 [2024-11-05 04:40:44.354952] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.354984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.355227] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.355257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.355465] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.355497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.355911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.355941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.356181] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.356210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.356578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.356607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.356830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.356860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.357238] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.357268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.357641] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.357671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.357886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.357918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 
00:29:30.934 [2024-11-05 04:40:44.358287] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.358317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.358568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.358597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.358830] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.358863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.359098] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.359130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.359421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.359451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.359736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.359774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.360043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.360072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.360282] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.360313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.360412] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.360440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.360806] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.360838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 
00:29:30.934 [2024-11-05 04:40:44.361217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.361248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.361582] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.361611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.361965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.361996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.362342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.362374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.362771] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.362803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.363136] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.363166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.363381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.363417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.363652] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.363684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.363947] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.363979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.934 [2024-11-05 04:40:44.364350] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.364380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 
00:29:30.934 [2024-11-05 04:40:44.364728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.934 [2024-11-05 04:40:44.364766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.934 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.365112] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.365142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.365381] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.365410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.365765] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.365796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.366000] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.366031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.366394] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.366423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.366790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.366821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.367218] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.367249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.367604] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.367634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.367964] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.367994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 
00:29:30.935 [2024-11-05 04:40:44.368328] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.368358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.368702] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.368733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.369108] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.369139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.369386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.369417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.369637] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.369669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.370018] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.370051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.370271] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.370301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.370623] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.370656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.371007] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.371039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.371367] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.371400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 
00:29:30.935 [2024-11-05 04:40:44.371614] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.371646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.372022] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.372054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.372436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.372466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.372723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.372765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.373016] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.373045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.373290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.373322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.373536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.373567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.373956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.373990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.374215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.374244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 00:29:30.935 [2024-11-05 04:40:44.374596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.935 [2024-11-05 04:40:44.374629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420 00:29:30.935 qpair failed and we were unable to recover it. 
00:29:30.935 [2024-11-05 04:40:44.374857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.935 [2024-11-05 04:40:44.374894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:29:30.935 qpair failed and we were unable to recover it.
00:29:30.935 (message triplet above repeated 28 times in total between 04:40:44.374857 and 04:40:44.383508; every attempt failed with errno = 111 on tqpair=0x7f6014000b90, addr=10.0.0.2, port=4420)
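For context: errno = 111 is ECONNREFUSED on Linux, meaning the initiator's connect() to 10.0.0.2:4420 (the NVMe/TCP well-known port) reached the host but nothing was accepting on that port, so the driver kept failing the qpair and retrying. A minimal sketch of how connect() surfaces this errno, using plain POSIX sockets rather than SPDK's actual posix_sock_create() path (the IP and port are taken from the log above):

/* Minimal sketch: reproduce the "connect() failed, errno = 111" pattern.
 * Plain POSIX illustration, not SPDK's socket layer. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET };
    addr.sin_port = htons(4420);                 /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With a reachable host but no listener on the port, this prints
         * errno 111 (ECONNREFUSED), matching the log lines above. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Run against a host with no listener on the port, this prints the same "connect() failed, errno = 111" seen throughout this section; if the host itself were unreachable, the errno would instead typically be 110 (ETIMEDOUT) or 113 (EHOSTUNREACH).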
00:29:30.936 Read completed with error (sct=0, sc=8)
00:29:30.936 starting I/O failed
00:29:30.936 (completion-error pair above repeated for 32 outstanding commands in total, 27 reads and 5 writes, all failing with sct=0, sc=8)
00:29:30.936 [2024-11-05 04:40:44.384380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
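The sct/sc pair in that burst is the NVMe completion status: sct=0 selects the Generic Command Status type, and within that type the NVMe base specification defines status code 08h as Command Aborted due to SQ Deletion, consistent with the queue pair being torn down underneath the 32 outstanding reads and writes; the CQ transport error -6 (ENXIO, "No such device or address") is the driver-level report of the same teardown. A minimal sketch of how sct and sc unpack from the 16-bit status field of a completion queue entry, assuming the standard NVMe CQE layout (illustrative only, not SPDK's spdk_nvme_cpl definitions):

/* Minimal sketch: unpack SCT/SC from an NVMe completion status word.
 * Per the NVMe base spec CQE layout: bit 0 is the phase tag, bits 8:1
 * the status code (SC), bits 11:9 the status code type (SCT), and
 * bit 15 the do-not-retry (DNR) flag. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Example status word with phase=0, SC=0x08, SCT=0 (generic). */
    uint16_t status = 0x08 << 1;

    unsigned sc  = (status >> 1) & 0xff;  /* status code       */
    unsigned sct = (status >> 9) & 0x7;   /* status code type  */
    unsigned dnr = (status >> 15) & 0x1;  /* do-not-retry bit  */

    printf("sct=%u, sc=%u, dnr=%u\n", sct, sc, dnr);
    /* sct=0, sc=8: Generic Command Status / Command Aborted due to
     * SQ Deletion, matching the failed reads and writes logged above. */
    return 0;
}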
00:29:30.936 [2024-11-05 04:40:44.384905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:30.936 [2024-11-05 04:40:44.384956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:30.936 qpair failed and we were unable to recover it.
00:29:30.936 (message triplet above repeated 171 times in total between 04:40:44.384905 and 04:40:44.437486; every attempt failed with errno = 111 on the replacement tqpair=0x7f6018000b90, addr=10.0.0.2, port=4420)
00:29:30.941 [2024-11-05 04:40:44.437823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.437833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.438155] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.438165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.438486] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.438495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.438823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.438835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.439154] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.439164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.439507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.439518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.439878] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.439888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.440251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.440263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.440601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.440612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.440804] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.440814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 
00:29:30.941 [2024-11-05 04:40:44.441139] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.441151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.441476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.441486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.441824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.441835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.442176] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.442186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.442516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.442527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.442820] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.442830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.443205] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.443215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.443522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.443533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.443854] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.443865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.444174] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.444185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 
00:29:30.941 [2024-11-05 04:40:44.444501] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.444512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.444872] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.444882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.445207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.445217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.941 [2024-11-05 04:40:44.445493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.941 [2024-11-05 04:40:44.445503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.941 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.445827] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.445837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.446120] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.446131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.446450] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.446460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.446779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.446794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.447127] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.447140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.447451] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.447462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 
00:29:30.942 [2024-11-05 04:40:44.447784] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.447796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.448137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.448147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.448469] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.448481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.448824] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.448834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.449160] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.449170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.449443] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.449453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.449779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.449791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.450080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.450090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.450416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.450426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.450736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.450756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 
00:29:30.942 [2024-11-05 04:40:44.451046] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.451056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.451415] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.451426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.451633] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.451642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.451960] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.451972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.452297] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.452309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.452655] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.452666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.452992] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.453004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.453325] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.453335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.453558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.453567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.453870] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.453881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 
00:29:30.942 [2024-11-05 04:40:44.454153] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.454164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.454477] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.454488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.454828] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.454839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.455163] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.455173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.455524] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.455535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.455847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.455858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.456202] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.456214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.456562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.456573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.456892] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.456903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.457228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.457238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 
00:29:30.942 [2024-11-05 04:40:44.457587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.457599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.457900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.942 [2024-11-05 04:40:44.457910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.942 qpair failed and we were unable to recover it. 00:29:30.942 [2024-11-05 04:40:44.458248] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.458259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.458576] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.458587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.458939] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.458951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.459334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.459345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.459659] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.459671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.459943] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.459955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.460275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.460287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.460653] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.460663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 
00:29:30.943 [2024-11-05 04:40:44.461008] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.461020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.461331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.461341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.461660] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.461672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.461995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.462006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.462322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.462333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.462692] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.462703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.462888] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.462899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.463088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.463097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.463268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.463278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.463601] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.463612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 
00:29:30.943 [2024-11-05 04:40:44.463798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.463809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.464129] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.464140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.464452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.464463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.464779] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.464790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.464975] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.464984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.465142] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.465152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.465470] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.465481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.465796] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.465806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.466138] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.466148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.466353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.466362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 
00:29:30.943 [2024-11-05 04:40:44.466685] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.466696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.466855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.466865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.467200] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.467210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.467530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.467539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.467860] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.467870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.943 [2024-11-05 04:40:44.468094] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.943 [2024-11-05 04:40:44.468103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.943 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.468417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.468427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.468723] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.468733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.469070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.469081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.469290] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.469300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 
00:29:30.944 [2024-11-05 04:40:44.469643] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.469654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.469944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.469954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.470252] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.470262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.470446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.470457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.470787] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.470797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.470977] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.470987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.471329] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.471341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.471650] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.471662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.471823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.471833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.472137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.472146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 
00:29:30.944 [2024-11-05 04:40:44.472509] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.472519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.472790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.472801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.473131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.473142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.473334] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.473343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.473504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.473514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.473805] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.473814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.474131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.474142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.474522] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.474531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.474869] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.474880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.475224] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.475234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 
00:29:30.944 [2024-11-05 04:40:44.475549] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.475560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.475855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.475866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.476064] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.476074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.476379] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.476388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.476433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.476440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.476625] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.476634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.476823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.476833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.477188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.477198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.477511] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.477522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.477844] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.477854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 
00:29:30.944 [2024-11-05 04:40:44.478203] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.478215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.478433] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.478445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.478754] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.478766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.944 [2024-11-05 04:40:44.479060] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.944 [2024-11-05 04:40:44.479071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.944 qpair failed and we were unable to recover it. 00:29:30.945 [2024-11-05 04:40:44.479230] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.945 [2024-11-05 04:40:44.479239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.945 qpair failed and we were unable to recover it. 00:29:30.945 [2024-11-05 04:40:44.479535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.945 [2024-11-05 04:40:44.479547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.945 qpair failed and we were unable to recover it. 00:29:30.945 [2024-11-05 04:40:44.479858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.945 [2024-11-05 04:40:44.479875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.945 qpair failed and we were unable to recover it. 00:29:30.945 [2024-11-05 04:40:44.480049] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.945 [2024-11-05 04:40:44.480059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.945 qpair failed and we were unable to recover it. 00:29:30.945 [2024-11-05 04:40:44.480348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.945 [2024-11-05 04:40:44.480358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.945 qpair failed and we were unable to recover it. 00:29:30.945 [2024-11-05 04:40:44.480674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.945 [2024-11-05 04:40:44.480683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.945 qpair failed and we were unable to recover it. 
00:29:30.950 [2024-11-05 04:40:44.539883] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.539894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.540208] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.540219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.540541] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.540552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.540858] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.540869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.541177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.541188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.541475] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.541484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.541809] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.541819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.542132] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.542144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.542330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.542340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.542533] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.542543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 
00:29:30.950 [2024-11-05 04:40:44.542862] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.542873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.543191] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.543201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.543388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.543399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.543732] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.543744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.544078] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.544089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.544388] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.544398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.544726] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.544737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.545080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.950 [2024-11-05 04:40:44.545091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.950 qpair failed and we were unable to recover it. 00:29:30.950 [2024-11-05 04:40:44.545409] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.951 [2024-11-05 04:40:44.545419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.951 qpair failed and we were unable to recover it. 00:29:30.951 [2024-11-05 04:40:44.545593] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.951 [2024-11-05 04:40:44.545605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.951 qpair failed and we were unable to recover it. 
00:29:30.951 [2024-11-05 04:40:44.545906] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.951 [2024-11-05 04:40:44.545917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.951 qpair failed and we were unable to recover it. 00:29:30.951 [2024-11-05 04:40:44.546228] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.951 [2024-11-05 04:40:44.546238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.951 qpair failed and we were unable to recover it. 00:29:30.951 [2024-11-05 04:40:44.546363] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.951 [2024-11-05 04:40:44.546373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.951 qpair failed and we were unable to recover it. 00:29:30.951 [2024-11-05 04:40:44.546665] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.951 [2024-11-05 04:40:44.546676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.951 qpair failed and we were unable to recover it. 00:29:30.951 [2024-11-05 04:40:44.546997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.951 [2024-11-05 04:40:44.547009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.951 qpair failed and we were unable to recover it. 00:29:30.951 [2024-11-05 04:40:44.547331] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.951 [2024-11-05 04:40:44.547341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.951 qpair failed and we were unable to recover it. 00:29:30.951 [2024-11-05 04:40:44.547508] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.951 [2024-11-05 04:40:44.547518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.951 qpair failed and we were unable to recover it. 00:29:30.951 [2024-11-05 04:40:44.547856] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.951 [2024-11-05 04:40:44.547868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.951 qpair failed and we were unable to recover it. 00:29:30.951 [2024-11-05 04:40:44.548055] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.951 [2024-11-05 04:40:44.548065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.951 qpair failed and we were unable to recover it. 00:29:30.951 [2024-11-05 04:40:44.548280] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.951 [2024-11-05 04:40:44.548290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.951 qpair failed and we were unable to recover it. 
00:29:30.951 [2024-11-05 04:40:44.548479] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.951 [2024-11-05 04:40:44.548491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.951 qpair failed and we were unable to recover it. 00:29:30.951 [2024-11-05 04:40:44.548704] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.951 [2024-11-05 04:40:44.548714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:30.951 qpair failed and we were unable to recover it. 00:29:31.223 [2024-11-05 04:40:44.548905] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.223 [2024-11-05 04:40:44.548919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.223 qpair failed and we were unable to recover it. 00:29:31.223 [2024-11-05 04:40:44.549215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.223 [2024-11-05 04:40:44.549228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.223 qpair failed and we were unable to recover it. 00:29:31.223 [2024-11-05 04:40:44.549607] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.549621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.549966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.549977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.550164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.550174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.550536] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.550546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.550846] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.550856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.551165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.551176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 
00:29:31.224 [2024-11-05 04:40:44.551493] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.551504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.551847] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.551858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.552177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.552188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.552372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.552383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.552716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.552725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.553061] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.553073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.553393] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.553405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.553763] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.553775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.553966] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.553976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.554316] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.554328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 
00:29:31.224 [2024-11-05 04:40:44.554642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.554654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.554979] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.554991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.555322] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.555334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.555686] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.555697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.556054] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.556066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.556249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.556261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.556594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.556606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.556798] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.556810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.557080] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.557092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.557249] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.557262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 
00:29:31.224 [2024-11-05 04:40:44.557595] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.557607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.557795] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.557807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.557963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.557974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.558164] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.558175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.558357] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.558367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.558587] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.558596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.558911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.558921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.558965] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.558974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.559296] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.559307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.559466] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.559478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 
00:29:31.224 [2024-11-05 04:40:44.559799] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.224 [2024-11-05 04:40:44.559810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.224 qpair failed and we were unable to recover it. 00:29:31.224 [2024-11-05 04:40:44.560157] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.560168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.560361] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.560372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.560694] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.560704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.560911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.560924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.561257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.561267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.561575] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.561585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.561902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.561913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.562245] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.562255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.562568] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.562580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 
00:29:31.225 [2024-11-05 04:40:44.562885] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.562896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.563081] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.563091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.563423] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.563433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.563600] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.563610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.563956] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.563966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.564141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.564150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.564330] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.564340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.564657] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.564668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.564984] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.564996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.565266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.565276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 
00:29:31.225 [2024-11-05 04:40:44.565572] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.565583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.565897] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.565908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.566239] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.566251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.566567] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.566578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.566868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.566878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.567038] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.567049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.567233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.567245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.567566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.567576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.567886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.567896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.568193] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.568203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 
00:29:31.225 [2024-11-05 04:40:44.568456] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.568465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.568791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.568802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.569105] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.569115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.569424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.569435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.569699] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.569709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.570021] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.570031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.570217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.570227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.570535] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.570545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.570851] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.225 [2024-11-05 04:40:44.570861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.225 qpair failed and we were unable to recover it. 00:29:31.225 [2024-11-05 04:40:44.571137] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.571147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 
00:29:31.226 [2024-11-05 04:40:44.571446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.571455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.571735] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.571758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.572088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.572100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.572454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.572464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.572766] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.572779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.573110] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.573120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.573386] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.573396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.573682] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.573692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.573988] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.573999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.574321] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.574331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 
00:29:31.226 [2024-11-05 04:40:44.574671] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.574682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.575004] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.575016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.575315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.575325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.575599] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.575608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.575902] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.575913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.576241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.576251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.576452] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.576462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.576790] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.576802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.577113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.577124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 00:29:31.226 [2024-11-05 04:40:44.577454] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.577464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it. 
00:29:31.226 [2024-11-05 04:40:44.577752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.226 [2024-11-05 04:40:44.577764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.226 qpair failed and we were unable to recover it.
00:29:31.229 [... the same connect()-failed / sock-connection-error / "qpair failed and we were unable to recover it." triplet repeats continuously for tqpair=0x7f6018000b90 (timestamps 2024-11-05 04:40:44.578125 through 04:40:44.610086), every attempt to 10.0.0.2 port 4420 failing with errno = 111 ...]
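On Linux, errno = 111 is ECONNREFUSED: the host at 10.0.0.2 is reachable, but nothing is accepting TCP connections on port 4420 (the NVMe/TCP well-known port), which is consistent with what the nvmf_target_disconnect test is exercising. A minimal, self-contained C sketch of the same failure path (not SPDK code; the address and port are simply the values taken from the log above):

```c
/* Minimal repro of the connect() failure logged above: when the peer host
 * is up but no listener is bound to the port, a TCP connect() fails with
 * errno 111 (ECONNREFUSED) on Linux. Standalone sketch, not SPDK code. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP well-known port */
    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no target listening, this prints errno = 111 on Linux. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}
```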
00:29:31.229 [2024-11-05 04:40:44.610446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.229 [2024-11-05 04:40:44.610556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6020000b90 with addr=10.0.0.2, port=4420 00:29:31.229 qpair failed and we were unable to recover it.
00:29:31.229 [... two further attempts on tqpair=0x7f6020000b90 (04:40:44.611002, 04:40:44.611400) fail the same way, after which the failures resume on tqpair=0x7f6018000b90 (04:40:44.611817 through 04:40:44.613507) ...]
00:29:31.230 [... the connect()-failed triplet for tqpair=0x7f6018000b90 continues unchanged (04:40:44.613819 through 04:40:44.621564), every attempt to 10.0.0.2 port 4420 returning errno = 111 and ending with "qpair failed and we were unable to recover it." ...]
00:29:31.230 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:31.230 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0
00:29:31.231 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:31.231 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:29:31.231 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.231 [... the connect()-failed triplet for tqpair=0x7f6018000b90 continues interleaved with the shell trace above (04:40:44.621738 through 04:40:44.623654) ...]
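The shell trace shows the harness side: autotest_common.sh's wait-loop condition (( i == 0 )) passes, the function returns 0, and timing_exit start_nvmf_tgt closes the target-startup timing section while the host-side driver keeps retrying the qpair connect. The repeated triplets have the shape of a bounded reconnect loop: attempt connect(), observe ECONNREFUSED, back off, retry, and eventually give up ("qpair failed and we were unable to recover it."). A self-contained C sketch of that shape is below; it is illustrative only, not SPDK's real logic (which lives in nvme_tcp_qpair_connect_sock() in nvme_tcp.c), and try_connect(), connect_with_retry(), and the 100 ms back-off are inventions for this sketch:

```c
/* Illustrative bounded-retry loop matching the shape of the repeated qpair
 * connect attempts in the log. NOT SPDK's nvme_tcp reconnect code. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical helper: one TCP connect attempt; leaves errno set on failure. */
static bool try_connect(const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return false;
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);
    bool ok = connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0;
    int saved = errno;          /* preserve connect()'s errno across close() */
    close(fd);
    errno = saved;
    return ok;
}

static bool connect_with_retry(const char *ip, int port, int max_attempts)
{
    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        if (try_connect(ip, port))
            return true;                 /* target finally accepted */
        if (errno != ECONNREFUSED)
            return false;                /* a different failure: stop early */
        fprintf(stderr, "attempt %d: connect() failed, errno = %d\n",
                attempt, errno);
        usleep(100 * 1000);              /* arbitrary back-off for the sketch */
    }
    return false;                        /* mirrors "unable to recover it" */
}

int main(void)
{
    if (!connect_with_retry("10.0.0.2", 4420, 10))
        fprintf(stderr, "giving up: target never accepted on port 4420\n");
    return 0;
}
```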
00:29:31.231 [... the identical failure pattern continues for tqpair=0x7f6018000b90 (04:40:44.623970 through 04:40:44.634076): every connect() to 10.0.0.2 port 4420 fails with errno = 111 and no qpair is recovered ...]
00:29:31.232 [2024-11-05 04:40:44.634372] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.634386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.634553] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.634560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.634861] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.634869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.635210] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.635219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.635421] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.635429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.635752] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.635761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.635944] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.635952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.636141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.636149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.636472] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.636482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.636816] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.636825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 
00:29:31.232 [2024-11-05 04:40:44.637144] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.637154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.637353] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.637361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.637677] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.637684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.637995] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.638002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.638323] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.638331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.638619] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.638628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.638823] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.638831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.639215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.639223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.639507] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.639514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.639716] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.639723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 
00:29:31.232 [2024-11-05 04:40:44.639957] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.639965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.640207] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.640216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.640537] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.640544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.640865] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.640873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.641188] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.641195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.641402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.641409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.641772] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.641780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.641997] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.642004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.642279] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.642287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 00:29:31.232 [2024-11-05 04:40:44.642500] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.232 [2024-11-05 04:40:44.642508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.232 qpair failed and we were unable to recover it. 
00:29:31.232 [2024-11-05 04:40:44.642756] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.642765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.643101] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.643109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.643431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.643439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.643786] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.643793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.644090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.644100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.644416] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.644425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.644616] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.644623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.644886] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.644894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.645225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.645235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.645315] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.645321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 
00:29:31.233 [2024-11-05 04:40:44.645598] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.645606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.645911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.645918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.646233] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.646241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.646559] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.646567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.646890] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.646900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.647147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.647154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.647463] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.647471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.647788] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.647796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.648099] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.648106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.648430] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.648438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 
00:29:31.233 [2024-11-05 04:40:44.648736] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.648743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.648954] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.648961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.649268] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.649275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.649485] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.649499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.649698] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.649705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.649998] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.650007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.650348] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.650356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.650642] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.650651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.650970] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.650978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.651266] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.651274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 
00:29:31.233 [2024-11-05 04:40:44.651566] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.651573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.651887] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.651894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.652090] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.652098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.233 [2024-11-05 04:40:44.652417] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.233 [2024-11-05 04:40:44.652424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.233 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.652719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.652727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.653043] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.653050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.653347] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.653355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.653675] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.653683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.654068] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.654075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.654251] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.654258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 
00:29:31.234 [2024-11-05 04:40:44.654563] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.654572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.654721] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.654728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.655024] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.655031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.655356] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.655364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.655628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.655635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.655936] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.655943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.656275] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.656282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.656558] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.656565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.656879] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.656887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.657183] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.657191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 
00:29:31.234 [2024-11-05 04:40:44.657376] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.657383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.657712] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.657719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.658005] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.658014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.658332] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.658340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.658518] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.658527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.658868] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.658875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.659050] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.659056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.659365] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.659373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.659560] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.659569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.659855] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.659862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 
00:29:31.234 [2024-11-05 04:40:44.660225] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.660233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.660578] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.660585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.660893] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.660900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.661063] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.661071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.661246] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.661254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.661574] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.661583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.661755] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.661762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.662088] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.662095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.662424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.662432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 00:29:31.234 [2024-11-05 04:40:44.662760] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.234 [2024-11-05 04:40:44.662768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.234 qpair failed and we were unable to recover it. 
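errno = 111 is Linux's ECONNREFUSED: the target at 10.0.0.2 answered the TCP SYN on port 4420 (the NVMe/TCP default) with an RST because nothing was listening there — exactly the condition this target-disconnect test provokes — and the SPDK host keeps retrying the qpair connect. Below is a minimal standalone C sketch of the same failure mode, assuming a reachable host with the port closed; it is illustrative only, not SPDK's posix_sock_create.

/* Minimal sketch (not SPDK code): a TCP connect() to a reachable host
 * with no listener on the port fails with ECONNREFUSED, which prints
 * as errno = 111 on Linux, matching the log records above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* A closed port on a reachable host answers with RST, so errno is
         * ECONNREFUSED (111). An unreachable host would instead give
         * ETIMEDOUT or EHOSTUNREACH. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}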
00:29:31.234 [... connect() retries keep failing with errno = 111 while the test setup below proceeds ...]
00:29:31.235 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:31.235 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:31.235 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.235 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.235 [2024-11-05 04:40:44.665438] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.235 [2024-11-05 04:40:44.665446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.235 qpair failed and we were unable to recover it.
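The xtrace lines above show the harness installing its cleanup trap ('process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' on SIGINT, SIGTERM, and EXIT) and then creating the RAM-backed test bdev: per SPDK's rpc.py conventions, bdev_malloc_create 64 512 -b Malloc0 requests a 64 MB malloc bdev with a 512-byte block size, named Malloc0. A rough C analogy of the trap semantics follows — a sketch only, with a hypothetical cleanup() body standing in for the script's process_shm/nvmftestfini, not how the bash builtin is implemented.

/* Sketch: run the same cleanup on normal exit and on SIGINT/SIGTERM,
 * mirroring: trap '...cleanup...' SIGINT SIGTERM EXIT */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static void cleanup(void)
{
    /* hypothetical stand-in for `process_shm ...; nvmftestfini` */
    fprintf(stderr, "cleaning up test state\n");
}

static void on_signal(int sig)
{
    (void)sig;
    /* Simplified for illustration: exit() runs the atexit() handler.
     * A production handler would only set a flag, since exit() is not
     * async-signal-safe. */
    exit(1);
}

int main(void)
{
    atexit(cleanup);                    /* the EXIT trap */

    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_signal;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);       /* the SIGINT trap */
    sigaction(SIGTERM, &sa, NULL);      /* the SIGTERM trap */

    pause();                            /* pretend to run the test */
    return 0;
}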
00:29:31.235 [... the connect()/qpair failure loop continues unchanged through 04:40:44.681 ...]
00:29:31.236 [2024-11-05 04:40:44.681596] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.236 [2024-11-05 04:40:44.681603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.236 qpair failed and we were unable to recover it.
00:29:31.236 [2024-11-05 04:40:44.681777] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.236 [2024-11-05 04:40:44.681785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.236 qpair failed and we were unable to recover it. 00:29:31.236 [2024-11-05 04:40:44.682103] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.236 [2024-11-05 04:40:44.682110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.236 qpair failed and we were unable to recover it. 00:29:31.236 [2024-11-05 04:40:44.682476] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.236 [2024-11-05 04:40:44.682483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.236 qpair failed and we were unable to recover it. 00:29:31.236 [2024-11-05 04:40:44.682761] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.236 [2024-11-05 04:40:44.682768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.236 qpair failed and we were unable to recover it. 00:29:31.236 [2024-11-05 04:40:44.683065] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.236 [2024-11-05 04:40:44.683072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.236 qpair failed and we were unable to recover it. 00:29:31.236 [2024-11-05 04:40:44.683113] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.236 [2024-11-05 04:40:44.683122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.236 qpair failed and we were unable to recover it. 00:29:31.236 [2024-11-05 04:40:44.683408] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.236 [2024-11-05 04:40:44.683415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.236 qpair failed and we were unable to recover it. 00:29:31.236 [2024-11-05 04:40:44.683573] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.236 [2024-11-05 04:40:44.683581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.236 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.683911] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.683919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.684147] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.684154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 
00:29:31.237 [2024-11-05 04:40:44.684489] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.684496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.684791] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.684799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.685106] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.685113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.685397] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.685404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.685706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.685713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.686020] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.686027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.686344] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.686352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.686636] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.686643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.686931] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.686939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.687219] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.687226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 
00:29:31.237 [2024-11-05 04:40:44.687504] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.687512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.687826] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.687833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.688001] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.688009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.688411] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.688418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.688727] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.688735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.689072] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.689079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.689354] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.689361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.689706] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.689713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.689987] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.689995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.690295] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.690302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 
00:29:31.237 [2024-11-05 04:40:44.690618] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.690626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.690963] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.690970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.691299] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.691306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.691639] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.691647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.691962] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.691970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.692313] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.692320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.692605] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.692613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.692912] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.692919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.693235] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.693243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 00:29:31.237 [2024-11-05 04:40:44.693540] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.237 [2024-11-05 04:40:44.693546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.237 qpair failed and we were unable to recover it. 
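errno 111 is ECONNREFUSED on Linux: each SYN to 10.0.0.2:4420 is answered with a RST because nothing on the target side is listening on that port yet, so the host's qpairs fail immediately and keep retrying. A one-liner to confirm the errno mapping (an illustration only, not part of the test scripts; assumes python3 is available on the box):

    # 111 <-> ECONNREFUSED under the standard Linux errno numbering
    python3 -c 'import errno, os; print(errno.ECONNREFUSED, os.strerror(errno.ECONNREFUSED))'
    # prints: 111 Connection refused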
00:29:31.237 [... connect() errno = 111 / qpair-failure triplet repeats 3x, 04:40:44.693821-04:40:44.694458 ...]
00:29:31.238 Malloc0
00:29:31.238 [... triplet repeats 3x, 04:40:44.694773-04:40:44.695428 ...]
00:29:31.238 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.238 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:29:31.238 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.238 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.238 [... triplet repeats ~17 more times, 04:40:44.695738-04:40:44.701574 ...]
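rpc_cmd here is the autotest wrapper around SPDK's JSON-RPC client, and this call initializes the TCP transport inside the freshly started nvmf_tgt. Outside the harness the same step would look roughly like the sketch below (a sketch, assuming a running nvmf_tgt and the stock scripts/rpc.py from the SPDK tree):

    # Target side: create the TCP transport; the nvmf_tgt acknowledges this
    # with the "*** TCP Transport Init ***" notice seen further down the log.
    scripts/rpc.py nvmf_create_transport -t tcp -o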
00:29:31.238 [... triplet repeats 1x, 04:40:44.701858-04:40:44.701866 ...]
00:29:31.238 [2024-11-05 04:40:44.702138] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:31.239 [... triplet repeats ~28 more times, 04:40:44.702203-04:40:44.710515 ...]
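The NOTICE confirms the transport was created, yet connect() keeps failing: a transport by itself opens no sockets, so until a subsystem with a listener on 10.0.0.2:4420 exists the port stays closed and every attempt still ends in ECONNREFUSED. A quick target-side check (illustration only, using iproute2's ss):

    # No LISTEN socket appears on 4420 until nvmf_subsystem_add_listener runs
    ss -ltn | grep 4420 || echo "nothing listening on 4420 yet"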
00:29:31.239 [... triplet repeats 2x, 04:40:44.710812-04:40:44.711138 ...]
00:29:31.239 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.239 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:31.239 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.239 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.240 [... triplet repeats ~33 more times, 04:40:44.711456-04:40:44.721798 ...]
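nvmf_create_subsystem registers the subsystem NQN nqn.2016-06.io.spdk:cnode1 on the target; -a allows any host NQN to connect and -s sets the serial number the controller reports in Identify. The standalone equivalent would be roughly:

    # Target side: create the subsystem, open to any host (-a), with the
    # serial number the test expects (-s)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001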
00:29:31.240 [2024-11-05 04:40:44.722091] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.240 [2024-11-05 04:40:44.722099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.240 qpair failed and we were unable to recover it. 00:29:31.240 [2024-11-05 04:40:44.722424] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.240 [2024-11-05 04:40:44.722431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.240 qpair failed and we were unable to recover it. 00:29:31.240 [2024-11-05 04:40:44.722729] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.240 [2024-11-05 04:40:44.722737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.240 qpair failed and we were unable to recover it. 00:29:31.240 [2024-11-05 04:40:44.722971] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.240 [2024-11-05 04:40:44.722977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.240 qpair failed and we were unable to recover it. 00:29:31.240 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:31.240 [2024-11-05 04:40:44.723201] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.240 [2024-11-05 04:40:44.723217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.240 qpair failed and we were unable to recover it. 00:29:31.240 [2024-11-05 04:40:44.723399] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.240 [2024-11-05 04:40:44.723406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.240 qpair failed and we were unable to recover it. 00:29:31.240 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:31.240 [2024-11-05 04:40:44.723591] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.240 [2024-11-05 04:40:44.723598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.240 qpair failed and we were unable to recover it. 00:29:31.240 [2024-11-05 04:40:44.723674] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.240 [2024-11-05 04:40:44.723680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.240 qpair failed and we were unable to recover it. 00:29:31.240 [2024-11-05 04:40:44.723857] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.240 [2024-11-05 04:40:44.723864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 00:29:31.240 qpair failed and we were unable to recover it. 
00:29:31.240 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.240 [2024-11-05 04:40:44.724177] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.240 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.240 [2024-11-05 04:40:44.724185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.240 qpair failed and we were unable to recover it.
00:29:31.240 [2024-11-05 04:40:44.724497] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.240 [2024-11-05 04:40:44.724504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.240 qpair failed and we were unable to recover it.
00:29:31.240 [2024-11-05 04:40:44.724734] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.240 [2024-11-05 04:40:44.724741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.240 qpair failed and we were unable to recover it.
00:29:31.240 [2024-11-05 04:40:44.724940] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.240 [2024-11-05 04:40:44.724950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.240 qpair failed and we were unable to recover it.
00:29:31.240 [2024-11-05 04:40:44.725131] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.240 [2024-11-05 04:40:44.725138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.240 qpair failed and we were unable to recover it.
00:29:31.240 [2024-11-05 04:40:44.725312] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.240 [2024-11-05 04:40:44.725319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.240 qpair failed and we were unable to recover it.
00:29:31.240 [2024-11-05 04:40:44.725594] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.240 [2024-11-05 04:40:44.725601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.240 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.725900] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.725908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.726217] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.726224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.726516] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.726524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.726815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.726822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.727040] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.727047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.727259] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.727265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.727589] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.727597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.727778] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.727785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.728123] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.728131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.728436] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.728444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.728748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.728756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.729044] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.729052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.729342] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.729350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.729654] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.729662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.729968] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.729976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.730141] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.730149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.730453] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.730461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.730740] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.730750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.731058] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.731066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.731380] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.731389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.731562] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.731570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.731770] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.731778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.732070] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.732078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.732241] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.732249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.732406] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.732414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.732715] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.732723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.733033] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.733041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.733309] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.733317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.733608] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.733615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.733783] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.733791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.734084] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.734092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.734257] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.734264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.734446] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.734454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.734748] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.734756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.735089] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.735098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.241 [2024-11-05 04:40:44.735402] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.735410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.241 [2024-11-05 04:40:44.735583] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.241 [2024-11-05 04:40:44.735591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:31.241 qpair failed and we were unable to recover it.
00:29:31.242 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.242 [2024-11-05 04:40:44.735929] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.735938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.736128] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.736136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.242 [2024-11-05 04:40:44.736304] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.736312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.736628] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.736637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.736967] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.736975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.737170] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.737178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.737303] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.737312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.737714] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.737823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.738215] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.738253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.738467] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.738509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6014000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.738815] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.738823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.739159] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.739167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.739530] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.739538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.739719] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.739726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.740010] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.740018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.740314] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.740322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.740645] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.740653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.741009] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.741017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.741165] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.741173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.741529] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.741537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.741728] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.741736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.742085] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.742093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 [2024-11-05 04:40:44.742414] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:31.242 [2024-11-05 04:40:44.742431] posix.c:1055:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:31.242 [2024-11-05 04:40:44.742439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6018000b90 with addr=10.0.0.2, port=4420
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.242 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:31.242 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:31.242 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:31.242 [2024-11-05 04:40:44.753128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.242 [2024-11-05 04:40:44.753199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.242 [2024-11-05 04:40:44.753213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.242 [2024-11-05 04:40:44.753219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.242 [2024-11-05 04:40:44.753224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.242 [2024-11-05 04:40:44.753239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.242 qpair failed and we were unable to recover it.
00:29:31.242 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:31.242 04:40:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3178308
00:29:31.242 [2024-11-05 04:40:44.762932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.242 [2024-11-05 04:40:44.762985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.242 [2024-11-05 04:40:44.762996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.242 [2024-11-05 04:40:44.763001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.242 [2024-11-05 04:40:44.763006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.242 [2024-11-05 04:40:44.763017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.242 qpair failed and we were unable to recover it.
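The xtrace lines interleaved above show what host/target_disconnect.sh is doing on the target side while the initiator retries: line 24 adds the Malloc0 namespace, line 25 adds the subsystem listener, and line 26 adds the discovery listener, at which point tcp.c logs the "Target Listening" notice. A sketch of that RPC sequence using the same rpc_cmd helper; only the last three calls appear verbatim in this log, the first three are assumed setup steps from the usual SPDK flow:

    rpc_cmd nvmf_create_transport -t tcp                         # assumed: create the TCP transport
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0                 # assumed: back the namespace with a malloc bdev
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a  # assumed: create the subsystem
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420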
00:29:31.242 [2024-11-05 04:40:44.773034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.242 [2024-11-05 04:40:44.773091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.243 [2024-11-05 04:40:44.773101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.243 [2024-11-05 04:40:44.773106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.243 [2024-11-05 04:40:44.773111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.243 [2024-11-05 04:40:44.773121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.243 qpair failed and we were unable to recover it.
00:29:31.243 [2024-11-05 04:40:44.783060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.243 [2024-11-05 04:40:44.783117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.243 [2024-11-05 04:40:44.783127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.243 [2024-11-05 04:40:44.783132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.243 [2024-11-05 04:40:44.783136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.243 [2024-11-05 04:40:44.783150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.243 qpair failed and we were unable to recover it.
00:29:31.243 [2024-11-05 04:40:44.793014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.243 [2024-11-05 04:40:44.793068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.243 [2024-11-05 04:40:44.793078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.243 [2024-11-05 04:40:44.793083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.243 [2024-11-05 04:40:44.793087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.243 [2024-11-05 04:40:44.793098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.243 qpair failed and we were unable to recover it.
00:29:31.243 [2024-11-05 04:40:44.803032] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.243 [2024-11-05 04:40:44.803080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.243 [2024-11-05 04:40:44.803090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.243 [2024-11-05 04:40:44.803095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.243 [2024-11-05 04:40:44.803099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.243 [2024-11-05 04:40:44.803109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.243 qpair failed and we were unable to recover it.
00:29:31.243 [2024-11-05 04:40:44.813065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.243 [2024-11-05 04:40:44.813114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.243 [2024-11-05 04:40:44.813123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.243 [2024-11-05 04:40:44.813128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.243 [2024-11-05 04:40:44.813133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.243 [2024-11-05 04:40:44.813143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.243 qpair failed and we were unable to recover it.
00:29:31.243 [2024-11-05 04:40:44.823079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.243 [2024-11-05 04:40:44.823129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.243 [2024-11-05 04:40:44.823139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.243 [2024-11-05 04:40:44.823144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.243 [2024-11-05 04:40:44.823148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.243 [2024-11-05 04:40:44.823158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.243 qpair failed and we were unable to recover it.
00:29:31.243 [2024-11-05 04:40:44.833146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.243 [2024-11-05 04:40:44.833198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.243 [2024-11-05 04:40:44.833208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.243 [2024-11-05 04:40:44.833213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.243 [2024-11-05 04:40:44.833217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.243 [2024-11-05 04:40:44.833227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.243 qpair failed and we were unable to recover it.
00:29:31.243 [2024-11-05 04:40:44.843169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.243 [2024-11-05 04:40:44.843222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.243 [2024-11-05 04:40:44.843232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.243 [2024-11-05 04:40:44.843237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.243 [2024-11-05 04:40:44.843241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.243 [2024-11-05 04:40:44.843252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.243 qpair failed and we were unable to recover it.
00:29:31.506 [2024-11-05 04:40:44.853179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.506 [2024-11-05 04:40:44.853226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.506 [2024-11-05 04:40:44.853236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.506 [2024-11-05 04:40:44.853241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.506 [2024-11-05 04:40:44.853246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.506 [2024-11-05 04:40:44.853256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.506 qpair failed and we were unable to recover it.
00:29:31.506 [2024-11-05 04:40:44.863159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.506 [2024-11-05 04:40:44.863210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.507 [2024-11-05 04:40:44.863220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.507 [2024-11-05 04:40:44.863225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.507 [2024-11-05 04:40:44.863229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.507 [2024-11-05 04:40:44.863239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.507 qpair failed and we were unable to recover it.
00:29:31.507 [2024-11-05 04:40:44.873220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.507 [2024-11-05 04:40:44.873269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.507 [2024-11-05 04:40:44.873282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.507 [2024-11-05 04:40:44.873287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.507 [2024-11-05 04:40:44.873291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.507 [2024-11-05 04:40:44.873301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.507 qpair failed and we were unable to recover it.
00:29:31.507 [2024-11-05 04:40:44.883235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.507 [2024-11-05 04:40:44.883288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.507 [2024-11-05 04:40:44.883298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.507 [2024-11-05 04:40:44.883303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.507 [2024-11-05 04:40:44.883307] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.507 [2024-11-05 04:40:44.883318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.507 qpair failed and we were unable to recover it.
00:29:31.507 [2024-11-05 04:40:44.893284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.507 [2024-11-05 04:40:44.893331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.507 [2024-11-05 04:40:44.893340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.507 [2024-11-05 04:40:44.893345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.507 [2024-11-05 04:40:44.893350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.507 [2024-11-05 04:40:44.893360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.507 qpair failed and we were unable to recover it.
00:29:31.507 [2024-11-05 04:40:44.903273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.507 [2024-11-05 04:40:44.903349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.507 [2024-11-05 04:40:44.903359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.507 [2024-11-05 04:40:44.903364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.507 [2024-11-05 04:40:44.903368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.507 [2024-11-05 04:40:44.903378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.507 qpair failed and we were unable to recover it.
00:29:31.507 [2024-11-05 04:40:44.913209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.507 [2024-11-05 04:40:44.913262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.507 [2024-11-05 04:40:44.913272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.507 [2024-11-05 04:40:44.913276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.507 [2024-11-05 04:40:44.913283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.507 [2024-11-05 04:40:44.913294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.507 qpair failed and we were unable to recover it.
00:29:31.507 [2024-11-05 04:40:44.923247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.507 [2024-11-05 04:40:44.923306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.507 [2024-11-05 04:40:44.923317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.507 [2024-11-05 04:40:44.923322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.507 [2024-11-05 04:40:44.923327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.507 [2024-11-05 04:40:44.923337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.507 qpair failed and we were unable to recover it.
00:29:31.507 [2024-11-05 04:40:44.933375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.507 [2024-11-05 04:40:44.933420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.507 [2024-11-05 04:40:44.933431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.507 [2024-11-05 04:40:44.933436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.507 [2024-11-05 04:40:44.933441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.507 [2024-11-05 04:40:44.933451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.507 qpair failed and we were unable to recover it.
00:29:31.507 [2024-11-05 04:40:44.943527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.507 [2024-11-05 04:40:44.943591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.507 [2024-11-05 04:40:44.943600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.507 [2024-11-05 04:40:44.943605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.507 [2024-11-05 04:40:44.943610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.507 [2024-11-05 04:40:44.943620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.507 qpair failed and we were unable to recover it.
00:29:31.507 [2024-11-05 04:40:44.953516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.507 [2024-11-05 04:40:44.953566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.507 [2024-11-05 04:40:44.953575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.507 [2024-11-05 04:40:44.953580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.507 [2024-11-05 04:40:44.953585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.507 [2024-11-05 04:40:44.953595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.507 qpair failed and we were unable to recover it.
00:29:31.507 [2024-11-05 04:40:44.963490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.507 [2024-11-05 04:40:44.963533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.507 [2024-11-05 04:40:44.963543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.507 [2024-11-05 04:40:44.963548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.507 [2024-11-05 04:40:44.963552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.507 [2024-11-05 04:40:44.963562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.507 qpair failed and we were unable to recover it.
00:29:31.507 [2024-11-05 04:40:44.973540] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.507 [2024-11-05 04:40:44.973584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.507 [2024-11-05 04:40:44.973593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.507 [2024-11-05 04:40:44.973598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.507 [2024-11-05 04:40:44.973603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.507 [2024-11-05 04:40:44.973613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.507 qpair failed and we were unable to recover it.
00:29:31.507 [2024-11-05 04:40:44.983594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.507 [2024-11-05 04:40:44.983648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.507 [2024-11-05 04:40:44.983658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.507 [2024-11-05 04:40:44.983663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.507 [2024-11-05 04:40:44.983668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.507 [2024-11-05 04:40:44.983678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.507 qpair failed and we were unable to recover it.
00:29:31.507 [2024-11-05 04:40:44.993566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.507 [2024-11-05 04:40:44.993623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.508 [2024-11-05 04:40:44.993634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.508 [2024-11-05 04:40:44.993639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.508 [2024-11-05 04:40:44.993644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.508 [2024-11-05 04:40:44.993654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.508 qpair failed and we were unable to recover it.
00:29:31.508 [2024-11-05 04:40:45.003495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.508 [2024-11-05 04:40:45.003594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.508 [2024-11-05 04:40:45.003606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.508 [2024-11-05 04:40:45.003612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.508 [2024-11-05 04:40:45.003616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.508 [2024-11-05 04:40:45.003627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.508 qpair failed and we were unable to recover it.
00:29:31.508 [2024-11-05 04:40:45.013601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.508 [2024-11-05 04:40:45.013651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.508 [2024-11-05 04:40:45.013661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.508 [2024-11-05 04:40:45.013666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.508 [2024-11-05 04:40:45.013671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.508 [2024-11-05 04:40:45.013681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.508 qpair failed and we were unable to recover it.
00:29:31.508 [2024-11-05 04:40:45.023614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.508 [2024-11-05 04:40:45.023677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.508 [2024-11-05 04:40:45.023687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.508 [2024-11-05 04:40:45.023692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.508 [2024-11-05 04:40:45.023698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.508 [2024-11-05 04:40:45.023710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.508 qpair failed and we were unable to recover it.
00:29:31.508 [2024-11-05 04:40:45.033683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.508 [2024-11-05 04:40:45.033738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.508 [2024-11-05 04:40:45.033752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.508 [2024-11-05 04:40:45.033757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.508 [2024-11-05 04:40:45.033762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.508 [2024-11-05 04:40:45.033772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.508 qpair failed and we were unable to recover it.
00:29:31.508 [2024-11-05 04:40:45.043694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.508 [2024-11-05 04:40:45.043744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.508 [2024-11-05 04:40:45.043758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.508 [2024-11-05 04:40:45.043765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.508 [2024-11-05 04:40:45.043770] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.508 [2024-11-05 04:40:45.043780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.508 qpair failed and we were unable to recover it.
00:29:31.508 [2024-11-05 04:40:45.053724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.508 [2024-11-05 04:40:45.053775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.508 [2024-11-05 04:40:45.053786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.508 [2024-11-05 04:40:45.053791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.508 [2024-11-05 04:40:45.053796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.508 [2024-11-05 04:40:45.053806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.508 qpair failed and we were unable to recover it.
00:29:31.508 [2024-11-05 04:40:45.063725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:31.508 [2024-11-05 04:40:45.063783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:31.508 [2024-11-05 04:40:45.063793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:31.508 [2024-11-05 04:40:45.063798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:31.508 [2024-11-05 04:40:45.063803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:31.508 [2024-11-05 04:40:45.063813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:31.508 qpair failed and we were unable to recover it.
00:29:31.508 [2024-11-05 04:40:45.073801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.508 [2024-11-05 04:40:45.073857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.508 [2024-11-05 04:40:45.073866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.508 [2024-11-05 04:40:45.073872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.508 [2024-11-05 04:40:45.073876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.508 [2024-11-05 04:40:45.073886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.508 qpair failed and we were unable to recover it. 00:29:31.508 [2024-11-05 04:40:45.083799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.508 [2024-11-05 04:40:45.083854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.508 [2024-11-05 04:40:45.083864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.508 [2024-11-05 04:40:45.083869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.508 [2024-11-05 04:40:45.083874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.508 [2024-11-05 04:40:45.083884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.508 qpair failed and we were unable to recover it. 00:29:31.508 [2024-11-05 04:40:45.093847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.508 [2024-11-05 04:40:45.093933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.508 [2024-11-05 04:40:45.093943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.508 [2024-11-05 04:40:45.093948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.508 [2024-11-05 04:40:45.093953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.508 [2024-11-05 04:40:45.093963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.508 qpair failed and we were unable to recover it. 
00:29:31.508 [2024-11-05 04:40:45.103866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.508 [2024-11-05 04:40:45.103914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.508 [2024-11-05 04:40:45.103924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.508 [2024-11-05 04:40:45.103929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.508 [2024-11-05 04:40:45.103934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.508 [2024-11-05 04:40:45.103944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.508 qpair failed and we were unable to recover it. 00:29:31.508 [2024-11-05 04:40:45.113886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.508 [2024-11-05 04:40:45.113935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.508 [2024-11-05 04:40:45.113945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.508 [2024-11-05 04:40:45.113950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.508 [2024-11-05 04:40:45.113955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.508 [2024-11-05 04:40:45.113965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.508 qpair failed and we were unable to recover it. 00:29:31.508 [2024-11-05 04:40:45.123912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.508 [2024-11-05 04:40:45.123991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.508 [2024-11-05 04:40:45.124001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.509 [2024-11-05 04:40:45.124006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.509 [2024-11-05 04:40:45.124011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.509 [2024-11-05 04:40:45.124021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.509 qpair failed and we were unable to recover it. 
00:29:31.509 [2024-11-05 04:40:45.133813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.509 [2024-11-05 04:40:45.133864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.509 [2024-11-05 04:40:45.133874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.509 [2024-11-05 04:40:45.133879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.509 [2024-11-05 04:40:45.133883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.509 [2024-11-05 04:40:45.133894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.509 qpair failed and we were unable to recover it. 00:29:31.771 [2024-11-05 04:40:45.143982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.771 [2024-11-05 04:40:45.144034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.771 [2024-11-05 04:40:45.144043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.771 [2024-11-05 04:40:45.144049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.771 [2024-11-05 04:40:45.144053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.771 [2024-11-05 04:40:45.144063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.771 qpair failed and we were unable to recover it. 00:29:31.771 [2024-11-05 04:40:45.153988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.771 [2024-11-05 04:40:45.154042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.771 [2024-11-05 04:40:45.154052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.771 [2024-11-05 04:40:45.154057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.771 [2024-11-05 04:40:45.154062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.771 [2024-11-05 04:40:45.154072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.771 qpair failed and we were unable to recover it. 
00:29:31.771 [2024-11-05 04:40:45.164050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.771 [2024-11-05 04:40:45.164128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.771 [2024-11-05 04:40:45.164138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.771 [2024-11-05 04:40:45.164143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.771 [2024-11-05 04:40:45.164148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.771 [2024-11-05 04:40:45.164158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.771 qpair failed and we were unable to recover it. 00:29:31.771 [2024-11-05 04:40:45.173973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.771 [2024-11-05 04:40:45.174037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.771 [2024-11-05 04:40:45.174047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.771 [2024-11-05 04:40:45.174055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.771 [2024-11-05 04:40:45.174059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.771 [2024-11-05 04:40:45.174069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.771 qpair failed and we were unable to recover it. 00:29:31.771 [2024-11-05 04:40:45.184094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.771 [2024-11-05 04:40:45.184142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.772 [2024-11-05 04:40:45.184152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.772 [2024-11-05 04:40:45.184157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.772 [2024-11-05 04:40:45.184161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.772 [2024-11-05 04:40:45.184171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.772 qpair failed and we were unable to recover it. 
00:29:31.772 [2024-11-05 04:40:45.193999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.772 [2024-11-05 04:40:45.194054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.772 [2024-11-05 04:40:45.194065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.772 [2024-11-05 04:40:45.194070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.772 [2024-11-05 04:40:45.194075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.772 [2024-11-05 04:40:45.194086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.772 qpair failed and we were unable to recover it. 00:29:31.772 [2024-11-05 04:40:45.204001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.772 [2024-11-05 04:40:45.204059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.772 [2024-11-05 04:40:45.204069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.772 [2024-11-05 04:40:45.204075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.772 [2024-11-05 04:40:45.204079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.772 [2024-11-05 04:40:45.204089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.772 qpair failed and we were unable to recover it. 00:29:31.772 [2024-11-05 04:40:45.214053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.772 [2024-11-05 04:40:45.214100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.772 [2024-11-05 04:40:45.214110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.772 [2024-11-05 04:40:45.214115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.772 [2024-11-05 04:40:45.214120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.772 [2024-11-05 04:40:45.214133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.772 qpair failed and we were unable to recover it. 
00:29:31.772 [2024-11-05 04:40:45.224208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.772 [2024-11-05 04:40:45.224256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.772 [2024-11-05 04:40:45.224266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.772 [2024-11-05 04:40:45.224271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.772 [2024-11-05 04:40:45.224275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.772 [2024-11-05 04:40:45.224285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.772 qpair failed and we were unable to recover it. 00:29:31.772 [2024-11-05 04:40:45.234227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.772 [2024-11-05 04:40:45.234317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.772 [2024-11-05 04:40:45.234327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.772 [2024-11-05 04:40:45.234332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.772 [2024-11-05 04:40:45.234337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.772 [2024-11-05 04:40:45.234347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.772 qpair failed and we were unable to recover it. 00:29:31.772 [2024-11-05 04:40:45.244267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.772 [2024-11-05 04:40:45.244315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.772 [2024-11-05 04:40:45.244325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.772 [2024-11-05 04:40:45.244330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.772 [2024-11-05 04:40:45.244335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.772 [2024-11-05 04:40:45.244345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.772 qpair failed and we were unable to recover it. 
00:29:31.772 [2024-11-05 04:40:45.254184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.772 [2024-11-05 04:40:45.254233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.772 [2024-11-05 04:40:45.254242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.772 [2024-11-05 04:40:45.254248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.772 [2024-11-05 04:40:45.254252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.772 [2024-11-05 04:40:45.254262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.772 qpair failed and we were unable to recover it. 00:29:31.772 [2024-11-05 04:40:45.264295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.772 [2024-11-05 04:40:45.264351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.772 [2024-11-05 04:40:45.264361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.772 [2024-11-05 04:40:45.264366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.772 [2024-11-05 04:40:45.264370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.772 [2024-11-05 04:40:45.264380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.772 qpair failed and we were unable to recover it. 00:29:31.772 [2024-11-05 04:40:45.274359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.772 [2024-11-05 04:40:45.274409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.772 [2024-11-05 04:40:45.274419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.772 [2024-11-05 04:40:45.274424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.772 [2024-11-05 04:40:45.274429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.772 [2024-11-05 04:40:45.274439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.772 qpair failed and we were unable to recover it. 
00:29:31.772 [2024-11-05 04:40:45.284363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.772 [2024-11-05 04:40:45.284410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.772 [2024-11-05 04:40:45.284420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.772 [2024-11-05 04:40:45.284425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.772 [2024-11-05 04:40:45.284430] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.772 [2024-11-05 04:40:45.284440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.772 qpair failed and we were unable to recover it. 00:29:31.772 [2024-11-05 04:40:45.294386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.772 [2024-11-05 04:40:45.294437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.772 [2024-11-05 04:40:45.294446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.772 [2024-11-05 04:40:45.294451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.772 [2024-11-05 04:40:45.294456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.772 [2024-11-05 04:40:45.294466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.772 qpair failed and we were unable to recover it. 00:29:31.772 [2024-11-05 04:40:45.304380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.772 [2024-11-05 04:40:45.304430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.772 [2024-11-05 04:40:45.304443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.772 [2024-11-05 04:40:45.304448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.772 [2024-11-05 04:40:45.304453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.772 [2024-11-05 04:40:45.304463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.772 qpair failed and we were unable to recover it. 
00:29:31.772 [2024-11-05 04:40:45.314441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.772 [2024-11-05 04:40:45.314493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.772 [2024-11-05 04:40:45.314504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.773 [2024-11-05 04:40:45.314508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.773 [2024-11-05 04:40:45.314513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.773 [2024-11-05 04:40:45.314523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-11-05 04:40:45.324485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.773 [2024-11-05 04:40:45.324537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.773 [2024-11-05 04:40:45.324557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.773 [2024-11-05 04:40:45.324563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.773 [2024-11-05 04:40:45.324568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.773 [2024-11-05 04:40:45.324582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-11-05 04:40:45.334493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.773 [2024-11-05 04:40:45.334551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.773 [2024-11-05 04:40:45.334570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.773 [2024-11-05 04:40:45.334576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.773 [2024-11-05 04:40:45.334582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.773 [2024-11-05 04:40:45.334596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.773 qpair failed and we were unable to recover it. 
00:29:31.773 [2024-11-05 04:40:45.344570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.773 [2024-11-05 04:40:45.344623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.773 [2024-11-05 04:40:45.344641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.773 [2024-11-05 04:40:45.344648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.773 [2024-11-05 04:40:45.344653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.773 [2024-11-05 04:40:45.344670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-11-05 04:40:45.354591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.773 [2024-11-05 04:40:45.354639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.773 [2024-11-05 04:40:45.354650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.773 [2024-11-05 04:40:45.354655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.773 [2024-11-05 04:40:45.354660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.773 [2024-11-05 04:40:45.354671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-11-05 04:40:45.364577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.773 [2024-11-05 04:40:45.364627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.773 [2024-11-05 04:40:45.364637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.773 [2024-11-05 04:40:45.364642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.773 [2024-11-05 04:40:45.364647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.773 [2024-11-05 04:40:45.364657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.773 qpair failed and we were unable to recover it. 
00:29:31.773 [2024-11-05 04:40:45.374514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.773 [2024-11-05 04:40:45.374575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.773 [2024-11-05 04:40:45.374585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.773 [2024-11-05 04:40:45.374590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.773 [2024-11-05 04:40:45.374594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.773 [2024-11-05 04:40:45.374605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-11-05 04:40:45.384683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.773 [2024-11-05 04:40:45.384733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.773 [2024-11-05 04:40:45.384742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.773 [2024-11-05 04:40:45.384751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.773 [2024-11-05 04:40:45.384756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.773 [2024-11-05 04:40:45.384766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.773 qpair failed and we were unable to recover it. 00:29:31.773 [2024-11-05 04:40:45.394676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.773 [2024-11-05 04:40:45.394728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.773 [2024-11-05 04:40:45.394738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.773 [2024-11-05 04:40:45.394743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.773 [2024-11-05 04:40:45.394752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.773 [2024-11-05 04:40:45.394770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.773 qpair failed and we were unable to recover it. 
00:29:31.773 [2024-11-05 04:40:45.404721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:31.773 [2024-11-05 04:40:45.404770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:31.773 [2024-11-05 04:40:45.404780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:31.773 [2024-11-05 04:40:45.404785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:31.773 [2024-11-05 04:40:45.404789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:31.773 [2024-11-05 04:40:45.404799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:31.773 qpair failed and we were unable to recover it. 00:29:32.036 [2024-11-05 04:40:45.414749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.036 [2024-11-05 04:40:45.414796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.036 [2024-11-05 04:40:45.414805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.036 [2024-11-05 04:40:45.414811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.036 [2024-11-05 04:40:45.414815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.036 [2024-11-05 04:40:45.414825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.036 qpair failed and we were unable to recover it. 00:29:32.036 [2024-11-05 04:40:45.424768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.036 [2024-11-05 04:40:45.424818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.036 [2024-11-05 04:40:45.424828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.036 [2024-11-05 04:40:45.424833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.036 [2024-11-05 04:40:45.424838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.036 [2024-11-05 04:40:45.424848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.036 qpair failed and we were unable to recover it. 
00:29:32.036 [2024-11-05 04:40:45.434812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.036 [2024-11-05 04:40:45.434863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.036 [2024-11-05 04:40:45.434880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.036 [2024-11-05 04:40:45.434885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.036 [2024-11-05 04:40:45.434889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.036 [2024-11-05 04:40:45.434900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.036 qpair failed and we were unable to recover it. 00:29:32.036 [2024-11-05 04:40:45.444842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.036 [2024-11-05 04:40:45.444930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.036 [2024-11-05 04:40:45.444940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.036 [2024-11-05 04:40:45.444945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.036 [2024-11-05 04:40:45.444949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.036 [2024-11-05 04:40:45.444960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.036 qpair failed and we were unable to recover it. 00:29:32.036 [2024-11-05 04:40:45.454855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.036 [2024-11-05 04:40:45.454951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.036 [2024-11-05 04:40:45.454961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.036 [2024-11-05 04:40:45.454966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.036 [2024-11-05 04:40:45.454970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.036 [2024-11-05 04:40:45.454980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.036 qpair failed and we were unable to recover it. 
00:29:32.036 [2024-11-05 04:40:45.464897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.036 [2024-11-05 04:40:45.464947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.036 [2024-11-05 04:40:45.464957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.036 [2024-11-05 04:40:45.464962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.036 [2024-11-05 04:40:45.464966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.036 [2024-11-05 04:40:45.464976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.036 qpair failed and we were unable to recover it. 00:29:32.036 [2024-11-05 04:40:45.474895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.036 [2024-11-05 04:40:45.474950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.036 [2024-11-05 04:40:45.474960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.036 [2024-11-05 04:40:45.474965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.036 [2024-11-05 04:40:45.474972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.036 [2024-11-05 04:40:45.474982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.036 qpair failed and we were unable to recover it. 00:29:32.036 [2024-11-05 04:40:45.484925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.036 [2024-11-05 04:40:45.484971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.036 [2024-11-05 04:40:45.484980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.036 [2024-11-05 04:40:45.484985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.036 [2024-11-05 04:40:45.484990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.036 [2024-11-05 04:40:45.485000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.036 qpair failed and we were unable to recover it. 
00:29:32.036 [2024-11-05 04:40:45.494980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.036 [2024-11-05 04:40:45.495065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.037 [2024-11-05 04:40:45.495075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.037 [2024-11-05 04:40:45.495079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.037 [2024-11-05 04:40:45.495084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.037 [2024-11-05 04:40:45.495094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.037 qpair failed and we were unable to recover it. 00:29:32.037 [2024-11-05 04:40:45.504912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.037 [2024-11-05 04:40:45.504968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.037 [2024-11-05 04:40:45.504978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.037 [2024-11-05 04:40:45.504982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.037 [2024-11-05 04:40:45.504987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.037 [2024-11-05 04:40:45.504997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.037 qpair failed and we were unable to recover it. 00:29:32.037 [2024-11-05 04:40:45.515055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.037 [2024-11-05 04:40:45.515109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.037 [2024-11-05 04:40:45.515119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.037 [2024-11-05 04:40:45.515124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.037 [2024-11-05 04:40:45.515128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.037 [2024-11-05 04:40:45.515138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.037 qpair failed and we were unable to recover it. 
00:29:32.037 [2024-11-05 04:40:45.525069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.037 [2024-11-05 04:40:45.525120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.037 [2024-11-05 04:40:45.525129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.037 [2024-11-05 04:40:45.525134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.037 [2024-11-05 04:40:45.525139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.037 [2024-11-05 04:40:45.525148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.037 qpair failed and we were unable to recover it. 00:29:32.037 [2024-11-05 04:40:45.535089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.037 [2024-11-05 04:40:45.535134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.037 [2024-11-05 04:40:45.535144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.037 [2024-11-05 04:40:45.535149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.037 [2024-11-05 04:40:45.535153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.037 [2024-11-05 04:40:45.535163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.037 qpair failed and we were unable to recover it. 00:29:32.037 [2024-11-05 04:40:45.545103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.037 [2024-11-05 04:40:45.545176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.037 [2024-11-05 04:40:45.545186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.037 [2024-11-05 04:40:45.545191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.037 [2024-11-05 04:40:45.545195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.037 [2024-11-05 04:40:45.545205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.037 qpair failed and we were unable to recover it. 
00:29:32.037 [2024-11-05 04:40:45.555153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.037 [2024-11-05 04:40:45.555245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.037 [2024-11-05 04:40:45.555254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.037 [2024-11-05 04:40:45.555259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.037 [2024-11-05 04:40:45.555263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.037 [2024-11-05 04:40:45.555273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.037 qpair failed and we were unable to recover it. 00:29:32.037 [2024-11-05 04:40:45.565174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.037 [2024-11-05 04:40:45.565269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.037 [2024-11-05 04:40:45.565281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.037 [2024-11-05 04:40:45.565286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.037 [2024-11-05 04:40:45.565290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.037 [2024-11-05 04:40:45.565301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.037 qpair failed and we were unable to recover it. 00:29:32.037 [2024-11-05 04:40:45.575205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.037 [2024-11-05 04:40:45.575256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.037 [2024-11-05 04:40:45.575266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.037 [2024-11-05 04:40:45.575271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.037 [2024-11-05 04:40:45.575275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.037 [2024-11-05 04:40:45.575285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.037 qpair failed and we were unable to recover it. 
00:29:32.037 [2024-11-05 04:40:45.585233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.037 [2024-11-05 04:40:45.585287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.037 [2024-11-05 04:40:45.585296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.037 [2024-11-05 04:40:45.585301] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.037 [2024-11-05 04:40:45.585306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.037 [2024-11-05 04:40:45.585316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.037 qpair failed and we were unable to recover it.
00:29:32.037-00:29:32.829 [2024-11-05 04:40:45.595 .. 04:40:46.267] (the identical seven-message CONNECT failure block above repeated for 68 further reconnect attempts at roughly 10 ms intervals, each ending: qpair failed and we were unable to recover it.)
00:29:32.829 [2024-11-05 04:40:46.277123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.829 [2024-11-05 04:40:46.277174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.829 [2024-11-05 04:40:46.277183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.829 [2024-11-05 04:40:46.277188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.829 [2024-11-05 04:40:46.277193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.829 [2024-11-05 04:40:46.277203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.829 qpair failed and we were unable to recover it. 00:29:32.829 [2024-11-05 04:40:46.287160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.829 [2024-11-05 04:40:46.287211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.829 [2024-11-05 04:40:46.287221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.829 [2024-11-05 04:40:46.287226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.829 [2024-11-05 04:40:46.287230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.829 [2024-11-05 04:40:46.287240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.829 qpair failed and we were unable to recover it. 00:29:32.829 [2024-11-05 04:40:46.297189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.829 [2024-11-05 04:40:46.297236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.829 [2024-11-05 04:40:46.297245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.829 [2024-11-05 04:40:46.297253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.829 [2024-11-05 04:40:46.297257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.829 [2024-11-05 04:40:46.297267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.829 qpair failed and we were unable to recover it. 
00:29:32.829 [2024-11-05 04:40:46.307191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.829 [2024-11-05 04:40:46.307244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.829 [2024-11-05 04:40:46.307254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.829 [2024-11-05 04:40:46.307259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.829 [2024-11-05 04:40:46.307263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.829 [2024-11-05 04:40:46.307273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.829 qpair failed and we were unable to recover it. 00:29:32.829 [2024-11-05 04:40:46.317263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.829 [2024-11-05 04:40:46.317320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.829 [2024-11-05 04:40:46.317330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.829 [2024-11-05 04:40:46.317334] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.829 [2024-11-05 04:40:46.317339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.829 [2024-11-05 04:40:46.317349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.829 qpair failed and we were unable to recover it. 00:29:32.829 [2024-11-05 04:40:46.327242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.829 [2024-11-05 04:40:46.327291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.829 [2024-11-05 04:40:46.327301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.829 [2024-11-05 04:40:46.327306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.829 [2024-11-05 04:40:46.327310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.829 [2024-11-05 04:40:46.327320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.829 qpair failed and we were unable to recover it. 
00:29:32.830 [2024-11-05 04:40:46.337313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.830 [2024-11-05 04:40:46.337403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.830 [2024-11-05 04:40:46.337413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.830 [2024-11-05 04:40:46.337418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.830 [2024-11-05 04:40:46.337422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.830 [2024-11-05 04:40:46.337436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.830 qpair failed and we were unable to recover it. 00:29:32.830 [2024-11-05 04:40:46.347311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.830 [2024-11-05 04:40:46.347363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.830 [2024-11-05 04:40:46.347373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.830 [2024-11-05 04:40:46.347377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.830 [2024-11-05 04:40:46.347382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.830 [2024-11-05 04:40:46.347392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.830 qpair failed and we were unable to recover it. 00:29:32.830 [2024-11-05 04:40:46.357365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.830 [2024-11-05 04:40:46.357417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.830 [2024-11-05 04:40:46.357426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.830 [2024-11-05 04:40:46.357432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.830 [2024-11-05 04:40:46.357436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.830 [2024-11-05 04:40:46.357447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.830 qpair failed and we were unable to recover it. 
00:29:32.830 [2024-11-05 04:40:46.367359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.830 [2024-11-05 04:40:46.367406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.830 [2024-11-05 04:40:46.367416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.830 [2024-11-05 04:40:46.367421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.830 [2024-11-05 04:40:46.367425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.830 [2024-11-05 04:40:46.367436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.830 qpair failed and we were unable to recover it. 00:29:32.830 [2024-11-05 04:40:46.377299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.830 [2024-11-05 04:40:46.377346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.830 [2024-11-05 04:40:46.377356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.830 [2024-11-05 04:40:46.377361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.830 [2024-11-05 04:40:46.377365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.830 [2024-11-05 04:40:46.377376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.830 qpair failed and we were unable to recover it. 00:29:32.830 [2024-11-05 04:40:46.387415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.830 [2024-11-05 04:40:46.387471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.830 [2024-11-05 04:40:46.387481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.830 [2024-11-05 04:40:46.387487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.830 [2024-11-05 04:40:46.387491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.830 [2024-11-05 04:40:46.387502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.830 qpair failed and we were unable to recover it. 
00:29:32.830 [2024-11-05 04:40:46.397476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.830 [2024-11-05 04:40:46.397529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.830 [2024-11-05 04:40:46.397539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.830 [2024-11-05 04:40:46.397545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.830 [2024-11-05 04:40:46.397550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.830 [2024-11-05 04:40:46.397560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.830 qpair failed and we were unable to recover it. 00:29:32.830 [2024-11-05 04:40:46.407480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.830 [2024-11-05 04:40:46.407527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.830 [2024-11-05 04:40:46.407537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.830 [2024-11-05 04:40:46.407542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.830 [2024-11-05 04:40:46.407546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.830 [2024-11-05 04:40:46.407557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.830 qpair failed and we were unable to recover it. 00:29:32.830 [2024-11-05 04:40:46.417509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.830 [2024-11-05 04:40:46.417564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.830 [2024-11-05 04:40:46.417574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.830 [2024-11-05 04:40:46.417579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.830 [2024-11-05 04:40:46.417583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.830 [2024-11-05 04:40:46.417593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.830 qpair failed and we were unable to recover it. 
00:29:32.830 [2024-11-05 04:40:46.427545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.830 [2024-11-05 04:40:46.427593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.830 [2024-11-05 04:40:46.427605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.830 [2024-11-05 04:40:46.427610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.830 [2024-11-05 04:40:46.427615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.830 [2024-11-05 04:40:46.427625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.830 qpair failed and we were unable to recover it. 00:29:32.830 [2024-11-05 04:40:46.437586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.830 [2024-11-05 04:40:46.437637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.830 [2024-11-05 04:40:46.437647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.830 [2024-11-05 04:40:46.437652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.830 [2024-11-05 04:40:46.437656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.830 [2024-11-05 04:40:46.437666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.830 qpair failed and we were unable to recover it. 00:29:32.830 [2024-11-05 04:40:46.447594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.830 [2024-11-05 04:40:46.447674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.830 [2024-11-05 04:40:46.447683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.830 [2024-11-05 04:40:46.447688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.830 [2024-11-05 04:40:46.447693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.830 [2024-11-05 04:40:46.447703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.830 qpair failed and we were unable to recover it. 
00:29:32.830 [2024-11-05 04:40:46.457629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:32.830 [2024-11-05 04:40:46.457676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:32.830 [2024-11-05 04:40:46.457685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:32.830 [2024-11-05 04:40:46.457690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:32.830 [2024-11-05 04:40:46.457695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:32.830 [2024-11-05 04:40:46.457705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:32.830 qpair failed and we were unable to recover it. 00:29:33.093 [2024-11-05 04:40:46.467622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.093 [2024-11-05 04:40:46.467675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.093 [2024-11-05 04:40:46.467684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.093 [2024-11-05 04:40:46.467689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.093 [2024-11-05 04:40:46.467697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.093 [2024-11-05 04:40:46.467707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.093 qpair failed and we were unable to recover it. 00:29:33.093 [2024-11-05 04:40:46.477703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.093 [2024-11-05 04:40:46.477756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.093 [2024-11-05 04:40:46.477766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.093 [2024-11-05 04:40:46.477771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.093 [2024-11-05 04:40:46.477776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.093 [2024-11-05 04:40:46.477786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.093 qpair failed and we were unable to recover it. 
00:29:33.093 [2024-11-05 04:40:46.487673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.093 [2024-11-05 04:40:46.487722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.093 [2024-11-05 04:40:46.487732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.093 [2024-11-05 04:40:46.487737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.093 [2024-11-05 04:40:46.487741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.093 [2024-11-05 04:40:46.487755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.093 qpair failed and we were unable to recover it. 00:29:33.093 [2024-11-05 04:40:46.497734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.093 [2024-11-05 04:40:46.497791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.093 [2024-11-05 04:40:46.497802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.093 [2024-11-05 04:40:46.497807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.093 [2024-11-05 04:40:46.497811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.093 [2024-11-05 04:40:46.497822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.093 qpair failed and we were unable to recover it. 00:29:33.093 [2024-11-05 04:40:46.507770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.093 [2024-11-05 04:40:46.507863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.093 [2024-11-05 04:40:46.507874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.093 [2024-11-05 04:40:46.507879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.093 [2024-11-05 04:40:46.507883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.093 [2024-11-05 04:40:46.507894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.093 qpair failed and we were unable to recover it. 
00:29:33.093 [2024-11-05 04:40:46.517816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.093 [2024-11-05 04:40:46.517870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.093 [2024-11-05 04:40:46.517879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.093 [2024-11-05 04:40:46.517884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.093 [2024-11-05 04:40:46.517889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.093 [2024-11-05 04:40:46.517899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.093 qpair failed and we were unable to recover it. 00:29:33.093 [2024-11-05 04:40:46.527813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.093 [2024-11-05 04:40:46.527896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.093 [2024-11-05 04:40:46.527906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.093 [2024-11-05 04:40:46.527910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.093 [2024-11-05 04:40:46.527915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.093 [2024-11-05 04:40:46.527925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.093 qpair failed and we were unable to recover it. 00:29:33.093 [2024-11-05 04:40:46.537889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.093 [2024-11-05 04:40:46.537935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.093 [2024-11-05 04:40:46.537945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.093 [2024-11-05 04:40:46.537950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.093 [2024-11-05 04:40:46.537955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.093 [2024-11-05 04:40:46.537965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.093 qpair failed and we were unable to recover it. 
00:29:33.093 [2024-11-05 04:40:46.547888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.093 [2024-11-05 04:40:46.547940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.093 [2024-11-05 04:40:46.547950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.093 [2024-11-05 04:40:46.547955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.093 [2024-11-05 04:40:46.547959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.093 [2024-11-05 04:40:46.547969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.093 qpair failed and we were unable to recover it. 00:29:33.093 [2024-11-05 04:40:46.557945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.093 [2024-11-05 04:40:46.558001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.093 [2024-11-05 04:40:46.558014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.093 [2024-11-05 04:40:46.558019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.093 [2024-11-05 04:40:46.558023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.093 [2024-11-05 04:40:46.558033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.093 qpair failed and we were unable to recover it. 00:29:33.093 [2024-11-05 04:40:46.567935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.093 [2024-11-05 04:40:46.568012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.093 [2024-11-05 04:40:46.568022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.093 [2024-11-05 04:40:46.568027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.093 [2024-11-05 04:40:46.568031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.093 [2024-11-05 04:40:46.568042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.093 qpair failed and we were unable to recover it. 
00:29:33.093 [2024-11-05 04:40:46.577959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.093 [2024-11-05 04:40:46.578006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.093 [2024-11-05 04:40:46.578017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.093 [2024-11-05 04:40:46.578022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.093 [2024-11-05 04:40:46.578026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.093 [2024-11-05 04:40:46.578037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.093 qpair failed and we were unable to recover it. 00:29:33.094 [2024-11-05 04:40:46.588041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.094 [2024-11-05 04:40:46.588091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.094 [2024-11-05 04:40:46.588102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.094 [2024-11-05 04:40:46.588107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.094 [2024-11-05 04:40:46.588111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.094 [2024-11-05 04:40:46.588122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.094 qpair failed and we were unable to recover it. 00:29:33.094 [2024-11-05 04:40:46.598045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.094 [2024-11-05 04:40:46.598119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.094 [2024-11-05 04:40:46.598128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.094 [2024-11-05 04:40:46.598133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.094 [2024-11-05 04:40:46.598140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.094 [2024-11-05 04:40:46.598150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.094 qpair failed and we were unable to recover it. 
00:29:33.094 [2024-11-05 04:40:46.608018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.094 [2024-11-05 04:40:46.608062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.094 [2024-11-05 04:40:46.608072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.094 [2024-11-05 04:40:46.608077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.094 [2024-11-05 04:40:46.608081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.094 [2024-11-05 04:40:46.608092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.094 qpair failed and we were unable to recover it. 00:29:33.094 [2024-11-05 04:40:46.618054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.094 [2024-11-05 04:40:46.618101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.094 [2024-11-05 04:40:46.618111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.094 [2024-11-05 04:40:46.618116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.094 [2024-11-05 04:40:46.618121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.094 [2024-11-05 04:40:46.618131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.094 qpair failed and we were unable to recover it. 00:29:33.094 [2024-11-05 04:40:46.628095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.094 [2024-11-05 04:40:46.628181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.094 [2024-11-05 04:40:46.628191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.094 [2024-11-05 04:40:46.628196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.094 [2024-11-05 04:40:46.628201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.094 [2024-11-05 04:40:46.628211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.094 qpair failed and we were unable to recover it. 
00:29:33.094 [2024-11-05 04:40:46.638136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.094 [2024-11-05 04:40:46.638184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.094 [2024-11-05 04:40:46.638194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.094 [2024-11-05 04:40:46.638199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.094 [2024-11-05 04:40:46.638203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.094 [2024-11-05 04:40:46.638213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.094 qpair failed and we were unable to recover it. 00:29:33.094 [2024-11-05 04:40:46.648175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.094 [2024-11-05 04:40:46.648225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.094 [2024-11-05 04:40:46.648235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.094 [2024-11-05 04:40:46.648240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.094 [2024-11-05 04:40:46.648244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.094 [2024-11-05 04:40:46.648255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.094 qpair failed and we were unable to recover it. 00:29:33.094 [2024-11-05 04:40:46.658185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.094 [2024-11-05 04:40:46.658237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.094 [2024-11-05 04:40:46.658246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.094 [2024-11-05 04:40:46.658251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.094 [2024-11-05 04:40:46.658256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.094 [2024-11-05 04:40:46.658266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.094 qpair failed and we were unable to recover it. 
00:29:33.094 [2024-11-05 04:40:46.668222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.094 [2024-11-05 04:40:46.668269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.094 [2024-11-05 04:40:46.668279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.094 [2024-11-05 04:40:46.668284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.094 [2024-11-05 04:40:46.668289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.094 [2024-11-05 04:40:46.668299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.094 qpair failed and we were unable to recover it. 00:29:33.094 [2024-11-05 04:40:46.678255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.094 [2024-11-05 04:40:46.678312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.094 [2024-11-05 04:40:46.678321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.094 [2024-11-05 04:40:46.678326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.094 [2024-11-05 04:40:46.678331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.094 [2024-11-05 04:40:46.678341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.094 qpair failed and we were unable to recover it. 00:29:33.094 [2024-11-05 04:40:46.688269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.094 [2024-11-05 04:40:46.688312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.094 [2024-11-05 04:40:46.688325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.094 [2024-11-05 04:40:46.688330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.094 [2024-11-05 04:40:46.688334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.094 [2024-11-05 04:40:46.688345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.094 qpair failed and we were unable to recover it. 
00:29:33.094 [2024-11-05 04:40:46.698296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.094 [2024-11-05 04:40:46.698351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.094 [2024-11-05 04:40:46.698361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.094 [2024-11-05 04:40:46.698366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.094 [2024-11-05 04:40:46.698371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.094 [2024-11-05 04:40:46.698381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.094 qpair failed and we were unable to recover it. 00:29:33.094 [2024-11-05 04:40:46.708320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.094 [2024-11-05 04:40:46.708367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.094 [2024-11-05 04:40:46.708376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.094 [2024-11-05 04:40:46.708381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.094 [2024-11-05 04:40:46.708386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.094 [2024-11-05 04:40:46.708396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.094 qpair failed and we were unable to recover it. 00:29:33.095 [2024-11-05 04:40:46.718377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.095 [2024-11-05 04:40:46.718429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.095 [2024-11-05 04:40:46.718439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.095 [2024-11-05 04:40:46.718445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.095 [2024-11-05 04:40:46.718450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.095 [2024-11-05 04:40:46.718459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.095 qpair failed and we were unable to recover it. 
00:29:33.095 [2024-11-05 04:40:46.728376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.095 [2024-11-05 04:40:46.728421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.095 [2024-11-05 04:40:46.728431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.095 [2024-11-05 04:40:46.728439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.095 [2024-11-05 04:40:46.728443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.095 [2024-11-05 04:40:46.728453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.095 qpair failed and we were unable to recover it. 00:29:33.357 [2024-11-05 04:40:46.738375] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.357 [2024-11-05 04:40:46.738419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.357 [2024-11-05 04:40:46.738430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.357 [2024-11-05 04:40:46.738435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.357 [2024-11-05 04:40:46.738440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.357 [2024-11-05 04:40:46.738450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.357 qpair failed and we were unable to recover it. 00:29:33.357 [2024-11-05 04:40:46.748458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.358 [2024-11-05 04:40:46.748506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.358 [2024-11-05 04:40:46.748516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.358 [2024-11-05 04:40:46.748521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.358 [2024-11-05 04:40:46.748525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.358 [2024-11-05 04:40:46.748535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.358 qpair failed and we were unable to recover it. 
00:29:33.358 [2024-11-05 04:40:46.758492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.358 [2024-11-05 04:40:46.758547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.358 [2024-11-05 04:40:46.758557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.358 [2024-11-05 04:40:46.758562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.358 [2024-11-05 04:40:46.758566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.358 [2024-11-05 04:40:46.758577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.358 qpair failed and we were unable to recover it. 00:29:33.358 [2024-11-05 04:40:46.768512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.358 [2024-11-05 04:40:46.768566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.358 [2024-11-05 04:40:46.768576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.358 [2024-11-05 04:40:46.768581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.358 [2024-11-05 04:40:46.768586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.358 [2024-11-05 04:40:46.768599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.358 qpair failed and we were unable to recover it. 00:29:33.358 [2024-11-05 04:40:46.778545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.358 [2024-11-05 04:40:46.778590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.358 [2024-11-05 04:40:46.778600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.358 [2024-11-05 04:40:46.778605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.358 [2024-11-05 04:40:46.778610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.358 [2024-11-05 04:40:46.778620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.358 qpair failed and we were unable to recover it. 
00:29:33.358 [2024-11-05 04:40:46.788574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.358 [2024-11-05 04:40:46.788625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.358 [2024-11-05 04:40:46.788635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.358 [2024-11-05 04:40:46.788640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.358 [2024-11-05 04:40:46.788644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.358 [2024-11-05 04:40:46.788655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.358 qpair failed and we were unable to recover it. 00:29:33.358 [2024-11-05 04:40:46.798603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.358 [2024-11-05 04:40:46.798662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.358 [2024-11-05 04:40:46.798672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.358 [2024-11-05 04:40:46.798677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.358 [2024-11-05 04:40:46.798682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.358 [2024-11-05 04:40:46.798692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.358 qpair failed and we were unable to recover it. 00:29:33.358 [2024-11-05 04:40:46.808618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.358 [2024-11-05 04:40:46.808663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.358 [2024-11-05 04:40:46.808673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.358 [2024-11-05 04:40:46.808678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.358 [2024-11-05 04:40:46.808682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.358 [2024-11-05 04:40:46.808692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.358 qpair failed and we were unable to recover it. 
00:29:33.358 [2024-11-05 04:40:46.818640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.358 [2024-11-05 04:40:46.818732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.358 [2024-11-05 04:40:46.818743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.358 [2024-11-05 04:40:46.818752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.358 [2024-11-05 04:40:46.818757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.358 [2024-11-05 04:40:46.818767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.358 qpair failed and we were unable to recover it. 00:29:33.358 [2024-11-05 04:40:46.828689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.358 [2024-11-05 04:40:46.828739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.358 [2024-11-05 04:40:46.828752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.358 [2024-11-05 04:40:46.828757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.358 [2024-11-05 04:40:46.828762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.358 [2024-11-05 04:40:46.828772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.358 qpair failed and we were unable to recover it. 00:29:33.358 [2024-11-05 04:40:46.838721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.358 [2024-11-05 04:40:46.838816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.358 [2024-11-05 04:40:46.838826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.358 [2024-11-05 04:40:46.838831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.358 [2024-11-05 04:40:46.838836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.358 [2024-11-05 04:40:46.838846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.358 qpair failed and we were unable to recover it. 
00:29:33.358 [2024-11-05 04:40:46.848737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.358 [2024-11-05 04:40:46.848794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.358 [2024-11-05 04:40:46.848803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.358 [2024-11-05 04:40:46.848808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.358 [2024-11-05 04:40:46.848813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.358 [2024-11-05 04:40:46.848823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.358 qpair failed and we were unable to recover it. 00:29:33.358 [2024-11-05 04:40:46.858651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.358 [2024-11-05 04:40:46.858710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.358 [2024-11-05 04:40:46.858721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.358 [2024-11-05 04:40:46.858731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.358 [2024-11-05 04:40:46.858735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.358 [2024-11-05 04:40:46.858749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.358 qpair failed and we were unable to recover it. 00:29:33.358 [2024-11-05 04:40:46.868805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.358 [2024-11-05 04:40:46.868856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.358 [2024-11-05 04:40:46.868866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.358 [2024-11-05 04:40:46.868871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.358 [2024-11-05 04:40:46.868876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.358 [2024-11-05 04:40:46.868886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.358 qpair failed and we were unable to recover it. 
00:29:33.358 [2024-11-05 04:40:46.878834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.359 [2024-11-05 04:40:46.878887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.359 [2024-11-05 04:40:46.878897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.359 [2024-11-05 04:40:46.878902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.359 [2024-11-05 04:40:46.878906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.359 [2024-11-05 04:40:46.878916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.359 qpair failed and we were unable to recover it. 00:29:33.359 [2024-11-05 04:40:46.888848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.359 [2024-11-05 04:40:46.888898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.359 [2024-11-05 04:40:46.888908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.359 [2024-11-05 04:40:46.888913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.359 [2024-11-05 04:40:46.888918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.359 [2024-11-05 04:40:46.888928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.359 qpair failed and we were unable to recover it. 00:29:33.359 [2024-11-05 04:40:46.898887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.359 [2024-11-05 04:40:46.898944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.359 [2024-11-05 04:40:46.898954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.359 [2024-11-05 04:40:46.898960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.359 [2024-11-05 04:40:46.898964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.359 [2024-11-05 04:40:46.898978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.359 qpair failed and we were unable to recover it. 
00:29:33.359 [2024-11-05 04:40:46.908906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.359 [2024-11-05 04:40:46.909008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.359 [2024-11-05 04:40:46.909018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.359 [2024-11-05 04:40:46.909023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.359 [2024-11-05 04:40:46.909027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.359 [2024-11-05 04:40:46.909037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.359 qpair failed and we were unable to recover it. 00:29:33.359 [2024-11-05 04:40:46.918963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.359 [2024-11-05 04:40:46.919016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.359 [2024-11-05 04:40:46.919026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.359 [2024-11-05 04:40:46.919031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.359 [2024-11-05 04:40:46.919035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.359 [2024-11-05 04:40:46.919045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.359 qpair failed and we were unable to recover it. 00:29:33.359 [2024-11-05 04:40:46.929016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.359 [2024-11-05 04:40:46.929073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.359 [2024-11-05 04:40:46.929098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.359 [2024-11-05 04:40:46.929104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.359 [2024-11-05 04:40:46.929108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.359 [2024-11-05 04:40:46.929126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.359 qpair failed and we were unable to recover it. 
00:29:33.359 [2024-11-05 04:40:46.938873] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.359 [2024-11-05 04:40:46.938939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.359 [2024-11-05 04:40:46.938950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.359 [2024-11-05 04:40:46.938955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.359 [2024-11-05 04:40:46.938959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.359 [2024-11-05 04:40:46.938970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.359 qpair failed and we were unable to recover it. 00:29:33.359 [2024-11-05 04:40:46.949140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.359 [2024-11-05 04:40:46.949199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.359 [2024-11-05 04:40:46.949209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.359 [2024-11-05 04:40:46.949214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.359 [2024-11-05 04:40:46.949218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.359 [2024-11-05 04:40:46.949229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.359 qpair failed and we were unable to recover it. 00:29:33.359 [2024-11-05 04:40:46.959061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.359 [2024-11-05 04:40:46.959110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.359 [2024-11-05 04:40:46.959120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.359 [2024-11-05 04:40:46.959125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.359 [2024-11-05 04:40:46.959130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.359 [2024-11-05 04:40:46.959140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.359 qpair failed and we were unable to recover it. 
00:29:33.359 [2024-11-05 04:40:46.969122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.359 [2024-11-05 04:40:46.969179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.359 [2024-11-05 04:40:46.969189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.359 [2024-11-05 04:40:46.969194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.359 [2024-11-05 04:40:46.969198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.359 [2024-11-05 04:40:46.969208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.359 qpair failed and we were unable to recover it. 00:29:33.359 [2024-11-05 04:40:46.979147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.359 [2024-11-05 04:40:46.979195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.359 [2024-11-05 04:40:46.979205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.359 [2024-11-05 04:40:46.979210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.359 [2024-11-05 04:40:46.979214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.359 [2024-11-05 04:40:46.979224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.359 qpair failed and we were unable to recover it. 00:29:33.359 [2024-11-05 04:40:46.989105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.359 [2024-11-05 04:40:46.989164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.359 [2024-11-05 04:40:46.989177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.359 [2024-11-05 04:40:46.989182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.359 [2024-11-05 04:40:46.989187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.359 [2024-11-05 04:40:46.989197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.359 qpair failed and we were unable to recover it. 
00:29:33.622 [2024-11-05 04:40:46.999156] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.622 [2024-11-05 04:40:46.999202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.622 [2024-11-05 04:40:46.999212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.622 [2024-11-05 04:40:46.999217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.622 [2024-11-05 04:40:46.999221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.622 [2024-11-05 04:40:46.999231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-11-05 04:40:47.009068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.622 [2024-11-05 04:40:47.009120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.622 [2024-11-05 04:40:47.009130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.622 [2024-11-05 04:40:47.009135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.622 [2024-11-05 04:40:47.009139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.622 [2024-11-05 04:40:47.009150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-11-05 04:40:47.019115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.622 [2024-11-05 04:40:47.019177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.622 [2024-11-05 04:40:47.019189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.622 [2024-11-05 04:40:47.019194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.622 [2024-11-05 04:40:47.019199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.622 [2024-11-05 04:40:47.019210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.622 qpair failed and we were unable to recover it. 
00:29:33.622 [2024-11-05 04:40:47.029307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.622 [2024-11-05 04:40:47.029394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.622 [2024-11-05 04:40:47.029404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.622 [2024-11-05 04:40:47.029409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.622 [2024-11-05 04:40:47.029417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.622 [2024-11-05 04:40:47.029427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-11-05 04:40:47.039171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.622 [2024-11-05 04:40:47.039214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.622 [2024-11-05 04:40:47.039224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.622 [2024-11-05 04:40:47.039229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.622 [2024-11-05 04:40:47.039234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.622 [2024-11-05 04:40:47.039244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-11-05 04:40:47.049216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.622 [2024-11-05 04:40:47.049316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.622 [2024-11-05 04:40:47.049325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.622 [2024-11-05 04:40:47.049330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.622 [2024-11-05 04:40:47.049335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.622 [2024-11-05 04:40:47.049345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.622 qpair failed and we were unable to recover it. 
00:29:33.622 [2024-11-05 04:40:47.059224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.622 [2024-11-05 04:40:47.059273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.622 [2024-11-05 04:40:47.059282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.622 [2024-11-05 04:40:47.059288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.622 [2024-11-05 04:40:47.059292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.622 [2024-11-05 04:40:47.059302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-11-05 04:40:47.069355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.622 [2024-11-05 04:40:47.069403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.622 [2024-11-05 04:40:47.069413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.622 [2024-11-05 04:40:47.069418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.622 [2024-11-05 04:40:47.069423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.622 [2024-11-05 04:40:47.069433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-11-05 04:40:47.079373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.622 [2024-11-05 04:40:47.079421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.622 [2024-11-05 04:40:47.079431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.622 [2024-11-05 04:40:47.079436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.622 [2024-11-05 04:40:47.079441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.622 [2024-11-05 04:40:47.079451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.622 qpair failed and we were unable to recover it. 
00:29:33.622 [2024-11-05 04:40:47.089313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.622 [2024-11-05 04:40:47.089361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.622 [2024-11-05 04:40:47.089372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.622 [2024-11-05 04:40:47.089377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.622 [2024-11-05 04:40:47.089381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.622 [2024-11-05 04:40:47.089392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.622 qpair failed and we were unable to recover it. 00:29:33.622 [2024-11-05 04:40:47.099440] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.622 [2024-11-05 04:40:47.099490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.622 [2024-11-05 04:40:47.099500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-05 04:40:47.099505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-05 04:40:47.099509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.623 [2024-11-05 04:40:47.099520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-11-05 04:40:47.109455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-05 04:40:47.109503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-05 04:40:47.109513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-05 04:40:47.109518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-05 04:40:47.109523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.623 [2024-11-05 04:40:47.109533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-05 04:40:47.119462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-05 04:40:47.119505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-05 04:40:47.119518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-05 04:40:47.119523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-05 04:40:47.119528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.623 [2024-11-05 04:40:47.119538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-11-05 04:40:47.129516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-05 04:40:47.129564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-05 04:40:47.129575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-05 04:40:47.129581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-05 04:40:47.129585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.623 [2024-11-05 04:40:47.129595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-11-05 04:40:47.139527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-05 04:40:47.139600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-05 04:40:47.139610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-05 04:40:47.139615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-05 04:40:47.139620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.623 [2024-11-05 04:40:47.139630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-05 04:40:47.149569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-05 04:40:47.149617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-05 04:40:47.149627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-05 04:40:47.149632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-05 04:40:47.149637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.623 [2024-11-05 04:40:47.149647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-11-05 04:40:47.159588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-05 04:40:47.159631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-05 04:40:47.159641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-05 04:40:47.159646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-05 04:40:47.159653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.623 [2024-11-05 04:40:47.159664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-11-05 04:40:47.169640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-05 04:40:47.169688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-05 04:40:47.169698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-05 04:40:47.169702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-05 04:40:47.169707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.623 [2024-11-05 04:40:47.169717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-05 04:40:47.179662] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-05 04:40:47.179720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-05 04:40:47.179729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-05 04:40:47.179734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-05 04:40:47.179739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.623 [2024-11-05 04:40:47.179753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-11-05 04:40:47.189684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-05 04:40:47.189735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-05 04:40:47.189749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-05 04:40:47.189754] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-05 04:40:47.189759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.623 [2024-11-05 04:40:47.189769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-11-05 04:40:47.199693] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-05 04:40:47.199763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-05 04:40:47.199773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-05 04:40:47.199778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-05 04:40:47.199783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.623 [2024-11-05 04:40:47.199793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.623 qpair failed and we were unable to recover it. 
00:29:33.623 [2024-11-05 04:40:47.209752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-05 04:40:47.209796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-05 04:40:47.209806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-05 04:40:47.209811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-05 04:40:47.209816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.623 [2024-11-05 04:40:47.209826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-11-05 04:40:47.219774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-05 04:40:47.219818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-05 04:40:47.219828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.623 [2024-11-05 04:40:47.219833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.623 [2024-11-05 04:40:47.219838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.623 [2024-11-05 04:40:47.219848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.623 qpair failed and we were unable to recover it. 00:29:33.623 [2024-11-05 04:40:47.229812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.623 [2024-11-05 04:40:47.229906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.623 [2024-11-05 04:40:47.229915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.624 [2024-11-05 04:40:47.229921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.624 [2024-11-05 04:40:47.229925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.624 [2024-11-05 04:40:47.229935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.624 qpair failed and we were unable to recover it. 
00:29:33.624 [2024-11-05 04:40:47.239815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.624 [2024-11-05 04:40:47.239858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.624 [2024-11-05 04:40:47.239868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.624 [2024-11-05 04:40:47.239873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.624 [2024-11-05 04:40:47.239877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.624 [2024-11-05 04:40:47.239887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.624 qpair failed and we were unable to recover it. 00:29:33.624 [2024-11-05 04:40:47.249847] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.624 [2024-11-05 04:40:47.249891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.624 [2024-11-05 04:40:47.249903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.624 [2024-11-05 04:40:47.249908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.624 [2024-11-05 04:40:47.249913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.624 [2024-11-05 04:40:47.249923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.624 qpair failed and we were unable to recover it. 00:29:33.886 [2024-11-05 04:40:47.259885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.886 [2024-11-05 04:40:47.259933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.886 [2024-11-05 04:40:47.259943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.887 [2024-11-05 04:40:47.259948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.887 [2024-11-05 04:40:47.259952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.887 [2024-11-05 04:40:47.259963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.887 qpair failed and we were unable to recover it. 
00:29:33.887 [2024-11-05 04:40:47.269920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.887 [2024-11-05 04:40:47.269972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.887 [2024-11-05 04:40:47.269981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.887 [2024-11-05 04:40:47.269986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.887 [2024-11-05 04:40:47.269991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.887 [2024-11-05 04:40:47.270001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.887 qpair failed and we were unable to recover it. 00:29:33.887 [2024-11-05 04:40:47.279926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.887 [2024-11-05 04:40:47.279974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.887 [2024-11-05 04:40:47.279983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.887 [2024-11-05 04:40:47.279988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.887 [2024-11-05 04:40:47.279993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.887 [2024-11-05 04:40:47.280003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.887 qpair failed and we were unable to recover it. 00:29:33.887 [2024-11-05 04:40:47.289984] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.887 [2024-11-05 04:40:47.290031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.887 [2024-11-05 04:40:47.290041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.887 [2024-11-05 04:40:47.290050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.887 [2024-11-05 04:40:47.290054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.887 [2024-11-05 04:40:47.290064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.887 qpair failed and we were unable to recover it. 
00:29:33.887 [2024-11-05 04:40:47.299872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.887 [2024-11-05 04:40:47.299921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.887 [2024-11-05 04:40:47.299931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.887 [2024-11-05 04:40:47.299936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.887 [2024-11-05 04:40:47.299940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.887 [2024-11-05 04:40:47.299950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.887 qpair failed and we were unable to recover it. 00:29:33.887 [2024-11-05 04:40:47.309956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.887 [2024-11-05 04:40:47.310012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.887 [2024-11-05 04:40:47.310022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.887 [2024-11-05 04:40:47.310027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.887 [2024-11-05 04:40:47.310032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.887 [2024-11-05 04:40:47.310042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.887 qpair failed and we were unable to recover it. 00:29:33.887 [2024-11-05 04:40:47.320014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:33.887 [2024-11-05 04:40:47.320063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:33.887 [2024-11-05 04:40:47.320073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:33.887 [2024-11-05 04:40:47.320078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.887 [2024-11-05 04:40:47.320083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:33.887 [2024-11-05 04:40:47.320092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.887 qpair failed and we were unable to recover it. 
00:29:34.416 [2024-11-05 04:40:47.961742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.416 [2024-11-05 04:40:47.961795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.416 [2024-11-05 04:40:47.961804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.416 [2024-11-05 04:40:47.961809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.416 [2024-11-05 04:40:47.961814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.416 [2024-11-05 04:40:47.961824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.416 qpair failed and we were unable to recover it. 00:29:34.416 [2024-11-05 04:40:47.971799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.416 [2024-11-05 04:40:47.971843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.416 [2024-11-05 04:40:47.971853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.416 [2024-11-05 04:40:47.971858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.416 [2024-11-05 04:40:47.971862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.416 [2024-11-05 04:40:47.971872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.416 qpair failed and we were unable to recover it. 00:29:34.416 [2024-11-05 04:40:47.981824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.416 [2024-11-05 04:40:47.981869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.416 [2024-11-05 04:40:47.981879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.416 [2024-11-05 04:40:47.981886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.416 [2024-11-05 04:40:47.981891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.416 [2024-11-05 04:40:47.981901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.416 qpair failed and we were unable to recover it. 
00:29:34.416 [2024-11-05 04:40:47.991822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.416 [2024-11-05 04:40:47.991864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.416 [2024-11-05 04:40:47.991874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.416 [2024-11-05 04:40:47.991879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.416 [2024-11-05 04:40:47.991883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.416 [2024-11-05 04:40:47.991893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.416 qpair failed and we were unable to recover it. 00:29:34.416 [2024-11-05 04:40:48.001876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.416 [2024-11-05 04:40:48.001917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.416 [2024-11-05 04:40:48.001927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.416 [2024-11-05 04:40:48.001932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.416 [2024-11-05 04:40:48.001936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.416 [2024-11-05 04:40:48.001946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.416 qpair failed and we were unable to recover it. 00:29:34.416 [2024-11-05 04:40:48.011887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.416 [2024-11-05 04:40:48.011934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.416 [2024-11-05 04:40:48.011943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.416 [2024-11-05 04:40:48.011948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.416 [2024-11-05 04:40:48.011953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.416 [2024-11-05 04:40:48.011963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.416 qpair failed and we were unable to recover it. 
00:29:34.416 [2024-11-05 04:40:48.021807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.416 [2024-11-05 04:40:48.021861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.416 [2024-11-05 04:40:48.021871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.416 [2024-11-05 04:40:48.021876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.416 [2024-11-05 04:40:48.021880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.416 [2024-11-05 04:40:48.021893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.416 qpair failed and we were unable to recover it. 00:29:34.416 [2024-11-05 04:40:48.031898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.416 [2024-11-05 04:40:48.031939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.416 [2024-11-05 04:40:48.031948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.416 [2024-11-05 04:40:48.031953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.416 [2024-11-05 04:40:48.031958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.416 [2024-11-05 04:40:48.031968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.416 qpair failed and we were unable to recover it. 00:29:34.416 [2024-11-05 04:40:48.041968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.416 [2024-11-05 04:40:48.042008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.416 [2024-11-05 04:40:48.042017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.416 [2024-11-05 04:40:48.042022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.416 [2024-11-05 04:40:48.042027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.416 [2024-11-05 04:40:48.042036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.416 qpair failed and we were unable to recover it. 
00:29:34.681 [2024-11-05 04:40:48.052035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.682 [2024-11-05 04:40:48.052102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.682 [2024-11-05 04:40:48.052112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.682 [2024-11-05 04:40:48.052117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.682 [2024-11-05 04:40:48.052122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.682 [2024-11-05 04:40:48.052132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.682 qpair failed and we were unable to recover it. 00:29:34.682 [2024-11-05 04:40:48.062034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.682 [2024-11-05 04:40:48.062077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.682 [2024-11-05 04:40:48.062087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.682 [2024-11-05 04:40:48.062092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.682 [2024-11-05 04:40:48.062096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.682 [2024-11-05 04:40:48.062106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.682 qpair failed and we were unable to recover it. 00:29:34.682 [2024-11-05 04:40:48.072027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.682 [2024-11-05 04:40:48.072071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.682 [2024-11-05 04:40:48.072081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.682 [2024-11-05 04:40:48.072086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.682 [2024-11-05 04:40:48.072090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.682 [2024-11-05 04:40:48.072101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.682 qpair failed and we were unable to recover it. 
00:29:34.682 [2024-11-05 04:40:48.082063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.682 [2024-11-05 04:40:48.082109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.682 [2024-11-05 04:40:48.082119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.682 [2024-11-05 04:40:48.082124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.682 [2024-11-05 04:40:48.082129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.682 [2024-11-05 04:40:48.082139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.682 qpair failed and we were unable to recover it. 00:29:34.682 [2024-11-05 04:40:48.092107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.682 [2024-11-05 04:40:48.092149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.682 [2024-11-05 04:40:48.092159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.682 [2024-11-05 04:40:48.092164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.682 [2024-11-05 04:40:48.092169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.682 [2024-11-05 04:40:48.092179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.682 qpair failed and we were unable to recover it. 00:29:34.682 [2024-11-05 04:40:48.102136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.682 [2024-11-05 04:40:48.102181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.682 [2024-11-05 04:40:48.102191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.682 [2024-11-05 04:40:48.102196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.682 [2024-11-05 04:40:48.102200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.682 [2024-11-05 04:40:48.102210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.682 qpair failed and we were unable to recover it. 
00:29:34.682 [2024-11-05 04:40:48.112134] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.683 [2024-11-05 04:40:48.112178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.683 [2024-11-05 04:40:48.112190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.683 [2024-11-05 04:40:48.112195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.683 [2024-11-05 04:40:48.112200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.683 [2024-11-05 04:40:48.112209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.683 qpair failed and we were unable to recover it. 00:29:34.683 [2024-11-05 04:40:48.122039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.683 [2024-11-05 04:40:48.122078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.683 [2024-11-05 04:40:48.122087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.683 [2024-11-05 04:40:48.122092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.683 [2024-11-05 04:40:48.122097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.683 [2024-11-05 04:40:48.122106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.683 qpair failed and we were unable to recover it. 00:29:34.683 [2024-11-05 04:40:48.132233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.683 [2024-11-05 04:40:48.132299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.683 [2024-11-05 04:40:48.132310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.683 [2024-11-05 04:40:48.132315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.683 [2024-11-05 04:40:48.132320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.683 [2024-11-05 04:40:48.132330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.683 qpair failed and we were unable to recover it. 
00:29:34.683 [2024-11-05 04:40:48.142209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.683 [2024-11-05 04:40:48.142254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.683 [2024-11-05 04:40:48.142264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.683 [2024-11-05 04:40:48.142269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.683 [2024-11-05 04:40:48.142273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.683 [2024-11-05 04:40:48.142283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.683 qpair failed and we were unable to recover it. 00:29:34.683 [2024-11-05 04:40:48.152241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.683 [2024-11-05 04:40:48.152282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.683 [2024-11-05 04:40:48.152292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.683 [2024-11-05 04:40:48.152297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.683 [2024-11-05 04:40:48.152305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.683 [2024-11-05 04:40:48.152315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.683 qpair failed and we were unable to recover it. 00:29:34.683 [2024-11-05 04:40:48.162274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.683 [2024-11-05 04:40:48.162315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.683 [2024-11-05 04:40:48.162325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.683 [2024-11-05 04:40:48.162330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.683 [2024-11-05 04:40:48.162335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.683 [2024-11-05 04:40:48.162344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.683 qpair failed and we were unable to recover it. 
00:29:34.683 [2024-11-05 04:40:48.172339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.683 [2024-11-05 04:40:48.172427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.683 [2024-11-05 04:40:48.172437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.683 [2024-11-05 04:40:48.172442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.683 [2024-11-05 04:40:48.172447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.683 [2024-11-05 04:40:48.172457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.683 qpair failed and we were unable to recover it. 00:29:34.684 [2024-11-05 04:40:48.182334] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.684 [2024-11-05 04:40:48.182379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.684 [2024-11-05 04:40:48.182390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.684 [2024-11-05 04:40:48.182395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.684 [2024-11-05 04:40:48.182399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.684 [2024-11-05 04:40:48.182409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.684 qpair failed and we were unable to recover it. 00:29:34.684 [2024-11-05 04:40:48.192360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.684 [2024-11-05 04:40:48.192402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.684 [2024-11-05 04:40:48.192411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.684 [2024-11-05 04:40:48.192416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.684 [2024-11-05 04:40:48.192421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.684 [2024-11-05 04:40:48.192431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.684 qpair failed and we were unable to recover it. 
00:29:34.684 [2024-11-05 04:40:48.202339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.684 [2024-11-05 04:40:48.202380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.684 [2024-11-05 04:40:48.202390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.684 [2024-11-05 04:40:48.202395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.684 [2024-11-05 04:40:48.202399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.684 [2024-11-05 04:40:48.202409] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.684 qpair failed and we were unable to recover it. 00:29:34.684 [2024-11-05 04:40:48.212436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.684 [2024-11-05 04:40:48.212478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.684 [2024-11-05 04:40:48.212489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.684 [2024-11-05 04:40:48.212494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.684 [2024-11-05 04:40:48.212499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.684 [2024-11-05 04:40:48.212509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.684 qpair failed and we were unable to recover it. 00:29:34.684 [2024-11-05 04:40:48.222450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.684 [2024-11-05 04:40:48.222495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.684 [2024-11-05 04:40:48.222505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.684 [2024-11-05 04:40:48.222509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.684 [2024-11-05 04:40:48.222514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.684 [2024-11-05 04:40:48.222524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.684 qpair failed and we were unable to recover it. 
00:29:34.684 [2024-11-05 04:40:48.232470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.684 [2024-11-05 04:40:48.232511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.684 [2024-11-05 04:40:48.232520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.684 [2024-11-05 04:40:48.232525] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.684 [2024-11-05 04:40:48.232530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.684 [2024-11-05 04:40:48.232540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.684 qpair failed and we were unable to recover it. 00:29:34.684 [2024-11-05 04:40:48.242493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.684 [2024-11-05 04:40:48.242542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.684 [2024-11-05 04:40:48.242554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.684 [2024-11-05 04:40:48.242559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.685 [2024-11-05 04:40:48.242564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.685 [2024-11-05 04:40:48.242574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.685 qpair failed and we were unable to recover it. 00:29:34.685 [2024-11-05 04:40:48.252546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.685 [2024-11-05 04:40:48.252599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.685 [2024-11-05 04:40:48.252617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.685 [2024-11-05 04:40:48.252624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.685 [2024-11-05 04:40:48.252629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.685 [2024-11-05 04:40:48.252642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.685 qpair failed and we were unable to recover it. 
00:29:34.685 [2024-11-05 04:40:48.262545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.685 [2024-11-05 04:40:48.262592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.685 [2024-11-05 04:40:48.262603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.685 [2024-11-05 04:40:48.262608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.685 [2024-11-05 04:40:48.262613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.685 [2024-11-05 04:40:48.262624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.685 qpair failed and we were unable to recover it. 00:29:34.685 [2024-11-05 04:40:48.272565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.685 [2024-11-05 04:40:48.272610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.685 [2024-11-05 04:40:48.272620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.685 [2024-11-05 04:40:48.272625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.685 [2024-11-05 04:40:48.272630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.685 [2024-11-05 04:40:48.272640] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.685 qpair failed and we were unable to recover it. 00:29:34.685 [2024-11-05 04:40:48.282615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.685 [2024-11-05 04:40:48.282656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.685 [2024-11-05 04:40:48.282666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.685 [2024-11-05 04:40:48.282671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.685 [2024-11-05 04:40:48.282679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.685 [2024-11-05 04:40:48.282690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.685 qpair failed and we were unable to recover it. 
00:29:34.685 [2024-11-05 04:40:48.292646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.685 [2024-11-05 04:40:48.292692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.685 [2024-11-05 04:40:48.292702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.685 [2024-11-05 04:40:48.292707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.685 [2024-11-05 04:40:48.292711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.685 [2024-11-05 04:40:48.292721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.689 qpair failed and we were unable to recover it. 00:29:34.689 [2024-11-05 04:40:48.302667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.689 [2024-11-05 04:40:48.302713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.689 [2024-11-05 04:40:48.302723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.689 [2024-11-05 04:40:48.302728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.689 [2024-11-05 04:40:48.302732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.690 [2024-11-05 04:40:48.302743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.690 qpair failed and we were unable to recover it. 00:29:34.690 [2024-11-05 04:40:48.312661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.690 [2024-11-05 04:40:48.312704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.690 [2024-11-05 04:40:48.312714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.690 [2024-11-05 04:40:48.312720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.690 [2024-11-05 04:40:48.312724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.690 [2024-11-05 04:40:48.312734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.690 qpair failed and we were unable to recover it. 
00:29:34.952 [2024-11-05 04:40:48.322719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.953 [2024-11-05 04:40:48.322765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.953 [2024-11-05 04:40:48.322775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.953 [2024-11-05 04:40:48.322780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.953 [2024-11-05 04:40:48.322785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.953 [2024-11-05 04:40:48.322795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.953 qpair failed and we were unable to recover it. 00:29:34.953 [2024-11-05 04:40:48.332780] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.953 [2024-11-05 04:40:48.332823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.953 [2024-11-05 04:40:48.332832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.953 [2024-11-05 04:40:48.332837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.953 [2024-11-05 04:40:48.332842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.953 [2024-11-05 04:40:48.332852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.953 qpair failed and we were unable to recover it. 00:29:34.953 [2024-11-05 04:40:48.342667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.953 [2024-11-05 04:40:48.342727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.953 [2024-11-05 04:40:48.342736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.953 [2024-11-05 04:40:48.342742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.953 [2024-11-05 04:40:48.342749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.953 [2024-11-05 04:40:48.342759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.953 qpair failed and we were unable to recover it. 
00:29:34.953 [2024-11-05 04:40:48.352785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.953 [2024-11-05 04:40:48.352832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.953 [2024-11-05 04:40:48.352842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.953 [2024-11-05 04:40:48.352847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.953 [2024-11-05 04:40:48.352852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.953 [2024-11-05 04:40:48.352862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.953 qpair failed and we were unable to recover it. 00:29:34.953 [2024-11-05 04:40:48.362835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.953 [2024-11-05 04:40:48.362875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.953 [2024-11-05 04:40:48.362885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.953 [2024-11-05 04:40:48.362890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.953 [2024-11-05 04:40:48.362895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.953 [2024-11-05 04:40:48.362905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.953 qpair failed and we were unable to recover it. 00:29:34.953 [2024-11-05 04:40:48.372850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.953 [2024-11-05 04:40:48.372891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.953 [2024-11-05 04:40:48.372904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.953 [2024-11-05 04:40:48.372909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.953 [2024-11-05 04:40:48.372913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.953 [2024-11-05 04:40:48.372923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.953 qpair failed and we were unable to recover it. 
00:29:34.953 [2024-11-05 04:40:48.382855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.953 [2024-11-05 04:40:48.382893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.953 [2024-11-05 04:40:48.382903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.953 [2024-11-05 04:40:48.382908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.953 [2024-11-05 04:40:48.382912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.953 [2024-11-05 04:40:48.382922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.953 qpair failed and we were unable to recover it. 00:29:34.953 [2024-11-05 04:40:48.392894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.953 [2024-11-05 04:40:48.392949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.953 [2024-11-05 04:40:48.392959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.953 [2024-11-05 04:40:48.392964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.953 [2024-11-05 04:40:48.392968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.953 [2024-11-05 04:40:48.392978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.953 qpair failed and we were unable to recover it. 00:29:34.953 [2024-11-05 04:40:48.402920] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.953 [2024-11-05 04:40:48.402962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.953 [2024-11-05 04:40:48.402972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.953 [2024-11-05 04:40:48.402977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.953 [2024-11-05 04:40:48.402981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.953 [2024-11-05 04:40:48.402991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.953 qpair failed and we were unable to recover it. 
00:29:34.953 [2024-11-05 04:40:48.412993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.953 [2024-11-05 04:40:48.413040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.953 [2024-11-05 04:40:48.413050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.953 [2024-11-05 04:40:48.413057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.953 [2024-11-05 04:40:48.413062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.953 [2024-11-05 04:40:48.413072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.953 qpair failed and we were unable to recover it. 00:29:34.953 [2024-11-05 04:40:48.422956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.953 [2024-11-05 04:40:48.422994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.953 [2024-11-05 04:40:48.423003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.953 [2024-11-05 04:40:48.423008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.953 [2024-11-05 04:40:48.423013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.953 [2024-11-05 04:40:48.423023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.953 qpair failed and we were unable to recover it. 00:29:34.953 [2024-11-05 04:40:48.432993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.953 [2024-11-05 04:40:48.433034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.953 [2024-11-05 04:40:48.433044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.953 [2024-11-05 04:40:48.433049] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.954 [2024-11-05 04:40:48.433053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.954 [2024-11-05 04:40:48.433063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.954 qpair failed and we were unable to recover it. 
00:29:34.954 [2024-11-05 04:40:48.443062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.954 [2024-11-05 04:40:48.443101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.954 [2024-11-05 04:40:48.443111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.954 [2024-11-05 04:40:48.443116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.954 [2024-11-05 04:40:48.443120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.954 [2024-11-05 04:40:48.443130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.954 qpair failed and we were unable to recover it. 00:29:34.954 [2024-11-05 04:40:48.453110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.954 [2024-11-05 04:40:48.453152] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.954 [2024-11-05 04:40:48.453162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.954 [2024-11-05 04:40:48.453167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.954 [2024-11-05 04:40:48.453172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.954 [2024-11-05 04:40:48.453185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.954 qpair failed and we were unable to recover it. 00:29:34.954 [2024-11-05 04:40:48.463054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:34.954 [2024-11-05 04:40:48.463089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:34.954 [2024-11-05 04:40:48.463098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:34.954 [2024-11-05 04:40:48.463103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:34.954 [2024-11-05 04:40:48.463108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:34.954 [2024-11-05 04:40:48.463118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.954 qpair failed and we were unable to recover it. 
00:29:35.483 [2024-11-05 04:40:49.104806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.483 [2024-11-05 04:40:49.104842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.483 [2024-11-05 04:40:49.104852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.483 [2024-11-05 04:40:49.104863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.483 [2024-11-05 04:40:49.104868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.483 [2024-11-05 04:40:49.104878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.483 qpair failed and we were unable to recover it. 00:29:35.483 [2024-11-05 04:40:49.114846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.483 [2024-11-05 04:40:49.114887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.483 [2024-11-05 04:40:49.114897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.483 [2024-11-05 04:40:49.114902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.483 [2024-11-05 04:40:49.114906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.483 [2024-11-05 04:40:49.114916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.483 qpair failed and we were unable to recover it. 00:29:35.744 [2024-11-05 04:40:49.124857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.744 [2024-11-05 04:40:49.124900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.744 [2024-11-05 04:40:49.124910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.744 [2024-11-05 04:40:49.124915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.744 [2024-11-05 04:40:49.124920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.744 [2024-11-05 04:40:49.124930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.744 qpair failed and we were unable to recover it. 
00:29:35.744 [2024-11-05 04:40:49.134868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.744 [2024-11-05 04:40:49.134907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.744 [2024-11-05 04:40:49.134916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.744 [2024-11-05 04:40:49.134921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.744 [2024-11-05 04:40:49.134926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.744 [2024-11-05 04:40:49.134936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.744 qpair failed and we were unable to recover it. 00:29:35.744 [2024-11-05 04:40:49.144782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.744 [2024-11-05 04:40:49.144821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.744 [2024-11-05 04:40:49.144831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.744 [2024-11-05 04:40:49.144836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.744 [2024-11-05 04:40:49.144841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.744 [2024-11-05 04:40:49.144854] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.744 qpair failed and we were unable to recover it. 00:29:35.744 [2024-11-05 04:40:49.154926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.744 [2024-11-05 04:40:49.154967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.744 [2024-11-05 04:40:49.154977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.744 [2024-11-05 04:40:49.154982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.744 [2024-11-05 04:40:49.154986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.745 [2024-11-05 04:40:49.154997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.745 qpair failed and we were unable to recover it. 
00:29:35.745 [2024-11-05 04:40:49.164979] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.745 [2024-11-05 04:40:49.165059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.745 [2024-11-05 04:40:49.165068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.745 [2024-11-05 04:40:49.165073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.745 [2024-11-05 04:40:49.165078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.745 [2024-11-05 04:40:49.165088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.745 qpair failed and we were unable to recover it. 00:29:35.745 [2024-11-05 04:40:49.175021] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.745 [2024-11-05 04:40:49.175059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.745 [2024-11-05 04:40:49.175069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.745 [2024-11-05 04:40:49.175074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.745 [2024-11-05 04:40:49.175078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.745 [2024-11-05 04:40:49.175088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.745 qpair failed and we were unable to recover it. 00:29:35.745 [2024-11-05 04:40:49.185040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.745 [2024-11-05 04:40:49.185078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.745 [2024-11-05 04:40:49.185088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.745 [2024-11-05 04:40:49.185093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.745 [2024-11-05 04:40:49.185098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.745 [2024-11-05 04:40:49.185108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.745 qpair failed and we were unable to recover it. 
00:29:35.745 [2024-11-05 04:40:49.195168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.745 [2024-11-05 04:40:49.195211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.745 [2024-11-05 04:40:49.195221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.745 [2024-11-05 04:40:49.195226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.745 [2024-11-05 04:40:49.195230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.745 [2024-11-05 04:40:49.195240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.745 qpair failed and we were unable to recover it. 00:29:35.745 [2024-11-05 04:40:49.205069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.745 [2024-11-05 04:40:49.205112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.745 [2024-11-05 04:40:49.205121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.745 [2024-11-05 04:40:49.205126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.745 [2024-11-05 04:40:49.205131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.745 [2024-11-05 04:40:49.205141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.745 qpair failed and we were unable to recover it. 00:29:35.745 [2024-11-05 04:40:49.215097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.745 [2024-11-05 04:40:49.215141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.745 [2024-11-05 04:40:49.215151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.745 [2024-11-05 04:40:49.215155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.745 [2024-11-05 04:40:49.215160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.745 [2024-11-05 04:40:49.215170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.745 qpair failed and we were unable to recover it. 
00:29:35.745 [2024-11-05 04:40:49.224992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.745 [2024-11-05 04:40:49.225034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.745 [2024-11-05 04:40:49.225045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.745 [2024-11-05 04:40:49.225050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.745 [2024-11-05 04:40:49.225055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.745 [2024-11-05 04:40:49.225065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.745 qpair failed and we were unable to recover it. 00:29:35.745 [2024-11-05 04:40:49.235135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.745 [2024-11-05 04:40:49.235177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.745 [2024-11-05 04:40:49.235190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.745 [2024-11-05 04:40:49.235195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.745 [2024-11-05 04:40:49.235200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.745 [2024-11-05 04:40:49.235210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.745 qpair failed and we were unable to recover it. 00:29:35.745 [2024-11-05 04:40:49.245179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.745 [2024-11-05 04:40:49.245221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.745 [2024-11-05 04:40:49.245231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.745 [2024-11-05 04:40:49.245237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.745 [2024-11-05 04:40:49.245241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.745 [2024-11-05 04:40:49.245251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.745 qpair failed and we were unable to recover it. 
00:29:35.745 [2024-11-05 04:40:49.255067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.745 [2024-11-05 04:40:49.255104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.745 [2024-11-05 04:40:49.255114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.745 [2024-11-05 04:40:49.255119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.745 [2024-11-05 04:40:49.255123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.745 [2024-11-05 04:40:49.255133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.745 qpair failed and we were unable to recover it. 00:29:35.745 [2024-11-05 04:40:49.265233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.745 [2024-11-05 04:40:49.265271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.745 [2024-11-05 04:40:49.265280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.745 [2024-11-05 04:40:49.265285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.745 [2024-11-05 04:40:49.265290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.745 [2024-11-05 04:40:49.265300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.745 qpair failed and we were unable to recover it. 00:29:35.745 [2024-11-05 04:40:49.275250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.745 [2024-11-05 04:40:49.275291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.745 [2024-11-05 04:40:49.275301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.745 [2024-11-05 04:40:49.275306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.745 [2024-11-05 04:40:49.275314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.745 [2024-11-05 04:40:49.275324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.745 qpair failed and we were unable to recover it. 
00:29:35.745 [2024-11-05 04:40:49.285278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.745 [2024-11-05 04:40:49.285323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.745 [2024-11-05 04:40:49.285333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.745 [2024-11-05 04:40:49.285339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.745 [2024-11-05 04:40:49.285343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.745 [2024-11-05 04:40:49.285353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.745 qpair failed and we were unable to recover it. 00:29:35.746 [2024-11-05 04:40:49.295296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.746 [2024-11-05 04:40:49.295338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.746 [2024-11-05 04:40:49.295348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.746 [2024-11-05 04:40:49.295352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.746 [2024-11-05 04:40:49.295357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.746 [2024-11-05 04:40:49.295368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.746 qpair failed and we were unable to recover it. 00:29:35.746 [2024-11-05 04:40:49.305319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.746 [2024-11-05 04:40:49.305360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.746 [2024-11-05 04:40:49.305370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.746 [2024-11-05 04:40:49.305375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.746 [2024-11-05 04:40:49.305379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.746 [2024-11-05 04:40:49.305389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.746 qpair failed and we were unable to recover it. 
00:29:35.746 [2024-11-05 04:40:49.315347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.746 [2024-11-05 04:40:49.315391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.746 [2024-11-05 04:40:49.315400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.746 [2024-11-05 04:40:49.315405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.746 [2024-11-05 04:40:49.315410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.746 [2024-11-05 04:40:49.315419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.746 qpair failed and we were unable to recover it. 00:29:35.746 [2024-11-05 04:40:49.325389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.746 [2024-11-05 04:40:49.325426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.746 [2024-11-05 04:40:49.325436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.746 [2024-11-05 04:40:49.325441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.746 [2024-11-05 04:40:49.325445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.746 [2024-11-05 04:40:49.325455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.746 qpair failed and we were unable to recover it. 00:29:35.746 [2024-11-05 04:40:49.335407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.746 [2024-11-05 04:40:49.335450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.746 [2024-11-05 04:40:49.335468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.746 [2024-11-05 04:40:49.335474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.746 [2024-11-05 04:40:49.335479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.746 [2024-11-05 04:40:49.335494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.746 qpair failed and we were unable to recover it. 
00:29:35.746 [2024-11-05 04:40:49.345383] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.746 [2024-11-05 04:40:49.345424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.746 [2024-11-05 04:40:49.345443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.746 [2024-11-05 04:40:49.345450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.746 [2024-11-05 04:40:49.345455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.746 [2024-11-05 04:40:49.345469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.746 qpair failed and we were unable to recover it. 00:29:35.746 [2024-11-05 04:40:49.355438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.746 [2024-11-05 04:40:49.355484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.746 [2024-11-05 04:40:49.355502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.746 [2024-11-05 04:40:49.355509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.746 [2024-11-05 04:40:49.355514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.746 [2024-11-05 04:40:49.355527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.746 qpair failed and we were unable to recover it. 00:29:35.746 [2024-11-05 04:40:49.365485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.746 [2024-11-05 04:40:49.365536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.746 [2024-11-05 04:40:49.365557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.746 [2024-11-05 04:40:49.365564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.746 [2024-11-05 04:40:49.365569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.746 [2024-11-05 04:40:49.365583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.746 qpair failed and we were unable to recover it. 
00:29:35.746 [2024-11-05 04:40:49.375536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:35.746 [2024-11-05 04:40:49.375577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:35.746 [2024-11-05 04:40:49.375588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:35.746 [2024-11-05 04:40:49.375593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:35.746 [2024-11-05 04:40:49.375598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:35.746 [2024-11-05 04:40:49.375609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:35.746 qpair failed and we were unable to recover it. 00:29:36.009 [2024-11-05 04:40:49.385525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.009 [2024-11-05 04:40:49.385565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.009 [2024-11-05 04:40:49.385575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.009 [2024-11-05 04:40:49.385580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.009 [2024-11-05 04:40:49.385585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.009 [2024-11-05 04:40:49.385595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.009 qpair failed and we were unable to recover it. 00:29:36.009 [2024-11-05 04:40:49.395547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.009 [2024-11-05 04:40:49.395595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.009 [2024-11-05 04:40:49.395605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.009 [2024-11-05 04:40:49.395610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.009 [2024-11-05 04:40:49.395614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.009 [2024-11-05 04:40:49.395625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.009 qpair failed and we were unable to recover it. 
00:29:36.009 [2024-11-05 04:40:49.405464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.009 [2024-11-05 04:40:49.405511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.009 [2024-11-05 04:40:49.405522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.009 [2024-11-05 04:40:49.405527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.009 [2024-11-05 04:40:49.405535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.009 [2024-11-05 04:40:49.405546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.009 qpair failed and we were unable to recover it. 00:29:36.009 [2024-11-05 04:40:49.415606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.009 [2024-11-05 04:40:49.415646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.009 [2024-11-05 04:40:49.415657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.009 [2024-11-05 04:40:49.415662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.009 [2024-11-05 04:40:49.415666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.009 [2024-11-05 04:40:49.415676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.009 qpair failed and we were unable to recover it. 00:29:36.009 [2024-11-05 04:40:49.425494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.009 [2024-11-05 04:40:49.425533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.009 [2024-11-05 04:40:49.425543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.009 [2024-11-05 04:40:49.425549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.009 [2024-11-05 04:40:49.425553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.009 [2024-11-05 04:40:49.425564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.009 qpair failed and we were unable to recover it. 
00:29:36.009 [2024-11-05 04:40:49.435650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.009 [2024-11-05 04:40:49.435693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.009 [2024-11-05 04:40:49.435703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.009 [2024-11-05 04:40:49.435708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.009 [2024-11-05 04:40:49.435712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.009 [2024-11-05 04:40:49.435722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.009 qpair failed and we were unable to recover it. 00:29:36.009 [2024-11-05 04:40:49.445671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.009 [2024-11-05 04:40:49.445715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.009 [2024-11-05 04:40:49.445725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.009 [2024-11-05 04:40:49.445730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.009 [2024-11-05 04:40:49.445735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.009 [2024-11-05 04:40:49.445745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.009 qpair failed and we were unable to recover it. 00:29:36.009 [2024-11-05 04:40:49.455699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.009 [2024-11-05 04:40:49.455737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.009 [2024-11-05 04:40:49.455751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.009 [2024-11-05 04:40:49.455756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.009 [2024-11-05 04:40:49.455760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.009 [2024-11-05 04:40:49.455771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.009 qpair failed and we were unable to recover it. 
00:29:36.009 [2024-11-05 04:40:49.465717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.009 [2024-11-05 04:40:49.465758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.009 [2024-11-05 04:40:49.465768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.009 [2024-11-05 04:40:49.465773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.009 [2024-11-05 04:40:49.465778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.009 [2024-11-05 04:40:49.465788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.009 qpair failed and we were unable to recover it. 00:29:36.009 [2024-11-05 04:40:49.475784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.009 [2024-11-05 04:40:49.475853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.009 [2024-11-05 04:40:49.475863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.009 [2024-11-05 04:40:49.475868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.009 [2024-11-05 04:40:49.475872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.009 [2024-11-05 04:40:49.475882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.009 qpair failed and we were unable to recover it. 00:29:36.009 [2024-11-05 04:40:49.485805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.009 [2024-11-05 04:40:49.485848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.009 [2024-11-05 04:40:49.485859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.009 [2024-11-05 04:40:49.485864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.009 [2024-11-05 04:40:49.485868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.009 [2024-11-05 04:40:49.485879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.009 qpair failed and we were unable to recover it. 
00:29:36.009 [2024-11-05 04:40:49.495839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.009 [2024-11-05 04:40:49.495882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.009 [2024-11-05 04:40:49.495892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.009 [2024-11-05 04:40:49.495897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.009 [2024-11-05 04:40:49.495901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.009 [2024-11-05 04:40:49.495911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.009 qpair failed and we were unable to recover it. 00:29:36.009 [2024-11-05 04:40:49.505838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.010 [2024-11-05 04:40:49.505874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.010 [2024-11-05 04:40:49.505883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.010 [2024-11-05 04:40:49.505888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.010 [2024-11-05 04:40:49.505893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.010 [2024-11-05 04:40:49.505903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.010 qpair failed and we were unable to recover it. 00:29:36.010 [2024-11-05 04:40:49.515898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.010 [2024-11-05 04:40:49.515939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.010 [2024-11-05 04:40:49.515948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.010 [2024-11-05 04:40:49.515953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.010 [2024-11-05 04:40:49.515958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.010 [2024-11-05 04:40:49.515968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.010 qpair failed and we were unable to recover it. 
00:29:36.010 [2024-11-05 04:40:49.525931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.010 [2024-11-05 04:40:49.525972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.010 [2024-11-05 04:40:49.525982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.010 [2024-11-05 04:40:49.525987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.010 [2024-11-05 04:40:49.525991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.010 [2024-11-05 04:40:49.526001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.010 qpair failed and we were unable to recover it. 00:29:36.010 [2024-11-05 04:40:49.535816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.010 [2024-11-05 04:40:49.535853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.010 [2024-11-05 04:40:49.535863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.010 [2024-11-05 04:40:49.535871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.010 [2024-11-05 04:40:49.535875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.010 [2024-11-05 04:40:49.535885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.010 qpair failed and we were unable to recover it. 00:29:36.010 [2024-11-05 04:40:49.545952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.010 [2024-11-05 04:40:49.545994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.010 [2024-11-05 04:40:49.546003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.010 [2024-11-05 04:40:49.546008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.010 [2024-11-05 04:40:49.546013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.010 [2024-11-05 04:40:49.546022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.010 qpair failed and we were unable to recover it. 
00:29:36.010 [2024-11-05 04:40:49.556065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.010 [2024-11-05 04:40:49.556107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.010 [2024-11-05 04:40:49.556117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.010 [2024-11-05 04:40:49.556122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.010 [2024-11-05 04:40:49.556127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.010 [2024-11-05 04:40:49.556136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.010 qpair failed and we were unable to recover it. 00:29:36.010 [2024-11-05 04:40:49.566068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.010 [2024-11-05 04:40:49.566114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.010 [2024-11-05 04:40:49.566124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.010 [2024-11-05 04:40:49.566129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.010 [2024-11-05 04:40:49.566133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.010 [2024-11-05 04:40:49.566143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.010 qpair failed and we were unable to recover it. 00:29:36.010 [2024-11-05 04:40:49.576015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.010 [2024-11-05 04:40:49.576057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.010 [2024-11-05 04:40:49.576066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.010 [2024-11-05 04:40:49.576071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.010 [2024-11-05 04:40:49.576075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.010 [2024-11-05 04:40:49.576088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.010 qpair failed and we were unable to recover it. 
00:29:36.010 [2024-11-05 04:40:49.586065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.010 [2024-11-05 04:40:49.586117] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.010 [2024-11-05 04:40:49.586127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.010 [2024-11-05 04:40:49.586132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.010 [2024-11-05 04:40:49.586136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.010 [2024-11-05 04:40:49.586146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.010 qpair failed and we were unable to recover it. 00:29:36.010 [2024-11-05 04:40:49.596078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.010 [2024-11-05 04:40:49.596124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.010 [2024-11-05 04:40:49.596133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.010 [2024-11-05 04:40:49.596138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.010 [2024-11-05 04:40:49.596143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.010 [2024-11-05 04:40:49.596153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.010 qpair failed and we were unable to recover it. 00:29:36.010 [2024-11-05 04:40:49.606140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:36.010 [2024-11-05 04:40:49.606186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:36.010 [2024-11-05 04:40:49.606196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:36.010 [2024-11-05 04:40:49.606200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:36.010 [2024-11-05 04:40:49.606205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90 00:29:36.010 [2024-11-05 04:40:49.606215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:36.010 qpair failed and we were unable to recover it. 
00:29:36.545 [2024-11-05 04:40:50.037262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.545 [2024-11-05 04:40:50.037302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.545 [2024-11-05 04:40:50.037312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.545 [2024-11-05 04:40:50.037318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.545 [2024-11-05 04:40:50.037323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:36.545 [2024-11-05 04:40:50.037334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:36.545 qpair failed and we were unable to recover it.
00:29:36.545 [2024-11-05 04:40:50.047306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.545 [2024-11-05 04:40:50.047347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.545 [2024-11-05 04:40:50.047357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.545 [2024-11-05 04:40:50.047363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.545 [2024-11-05 04:40:50.047367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:36.545 [2024-11-05 04:40:50.047379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:36.545 qpair failed and we were unable to recover it.
00:29:36.545 [2024-11-05 04:40:50.057321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.545 [2024-11-05 04:40:50.057364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.545 [2024-11-05 04:40:50.057374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.545 [2024-11-05 04:40:50.057379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.545 [2024-11-05 04:40:50.057384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6018000b90
00:29:36.545 [2024-11-05 04:40:50.057394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:36.545 qpair failed and we were unable to recover it.
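This seven-line failure block recurs nearly unchanged for every reconnect attempt between 04:40:49.586 and 04:40:50.057, always against tqpair=0x7f6018000b90 / qpair id 2. The status pair it reports, sct 1, sc 130, decodes per the NVMe spec as status code type 0x1 (command specific) with code 0x82, which for the Fabrics CONNECT command is Connect Invalid Parameters. That matches the target-side "Unknown controller ID 0x1": the restarted target no longer knows the controller the host is naming when it tries to add an I/O queue. A minimal, self-contained decode sketch follows; the struct layout mirrors completion dword 3 of the NVMe base spec and is illustrative, not SPDK's own definition.

    #include <stdint.h>
    #include <stdio.h>

    /* Status half of completion dword 3, per the NVMe base spec:
     * bit 16 phase, bits 24:17 SC, 27:25 SCT, 29:28 CRD, 30 more, 31 DNR. */
    struct cqe_status {
        uint16_t p   : 1;  /* phase tag           */
        uint16_t sc  : 8;  /* status code         */
        uint16_t sct : 3;  /* status code type    */
        uint16_t crd : 2;  /* command retry delay */
        uint16_t m   : 1;  /* more                */
        uint16_t dnr : 1;  /* do not retry        */
    };

    static void decode(struct cqe_status st)
    {
        if (st.sct == 0x1 && st.sc == 0x82)          /* "sct 1, sc 130" */
            puts("Fabrics CONNECT: Connect Invalid Parameters");
        else if (st.sct == 0x0 && st.sc == 0x08)     /* "sct=0, sc=8"   */
            puts("Generic: Command Aborted due to SQ Deletion");
        else
            printf("sct %u, sc %u\n", (unsigned)st.sct, (unsigned)st.sc);
    }

    int main(void)
    {
        decode((struct cqe_status){ .sct = 0x1, .sc = 0x82 });
        return 0;
    }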
00:29:36.545 [2024-11-05 04:40:50.057645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12eae00 is same with the state(6) to be set
00:29:36.545 [2024-11-05 04:40:50.067349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.545 [2024-11-05 04:40:50.067447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.545 [2024-11-05 04:40:50.067511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.545 [2024-11-05 04:40:50.067538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.545 [2024-11-05 04:40:50.067560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6014000b90
00:29:36.545 [2024-11-05 04:40:50.067615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:36.545 qpair failed and we were unable to recover it.
00:29:36.545 [2024-11-05 04:40:50.077391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.545 [2024-11-05 04:40:50.077497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.545 [2024-11-05 04:40:50.077561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.545 [2024-11-05 04:40:50.077588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.546 [2024-11-05 04:40:50.077609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6014000b90
00:29:36.546 [2024-11-05 04:40:50.077663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:36.546 qpair failed and we were unable to recover it.
00:29:36.546 [2024-11-05 04:40:50.087593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.546 [2024-11-05 04:40:50.087698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.546 [2024-11-05 04:40:50.087773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.546 [2024-11-05 04:40:50.087800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.546 [2024-11-05 04:40:50.087821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:29:36.546 [2024-11-05 04:40:50.087876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.546 qpair failed and we were unable to recover it.
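The trailing "-6 (No such device or address)" in these entries is -ENXIO surfacing from spdk_nvme_qpair_process_completions() once the TCP connection under the qpair has died; the distinct tqpair pointers (0x7f6018000b90, 0x7f6014000b90, 0x7f6020000b90) are successive qpair objects created by the retry loop. A host-side sketch of how a caller sees this through SPDK's public API; the helper name and error handling are assumptions for illustration, not code from this test.

    #include <errno.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Poll one I/O qpair. A return of -ENXIO (-6) means the transport
     * connection is gone and the qpair must be freed and re-allocated,
     * which is the reconnect path this test keeps forcing to fail. */
    static int poll_qpair_once(struct spdk_nvme_qpair *qpair)
    {
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no cap */);

        if (rc == -ENXIO) {
            fprintf(stderr, "qpair transport error -6; needs reconnect\n");
            return -ENXIO;
        }
        return rc < 0 ? (int)rc : 0;
    }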
00:29:36.546 [2024-11-05 04:40:50.097430] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.546 [2024-11-05 04:40:50.097492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.546 [2024-11-05 04:40:50.097538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.546 [2024-11-05 04:40:50.097555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.546 [2024-11-05 04:40:50.097570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f6020000b90
00:29:36.546 [2024-11-05 04:40:50.097608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:36.546 qpair failed and we were unable to recover it.
00:29:36.546 Write completed with error (sct=0, sc=8)
00:29:36.546 starting I/O failed
00:29:36.546 Write completed with error (sct=0, sc=8)
00:29:36.546 starting I/O failed
00:29:36.546 Read completed with error (sct=0, sc=8)
00:29:36.546 starting I/O failed
00:29:36.546 Read completed with error (sct=0, sc=8)
00:29:36.546 starting I/O failed
[... further Read/Write completions fail the same way, (sct=0, sc=8), each starting I/O failed ...]
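Each "completed with error (sct=0, sc=8)" line is an in-flight read or write finishing with generic status 0x08, Command Aborted due to SQ Deletion: the submission queue was torn down under outstanding I/O when the qpair dropped. In an SPDK completion callback that case can be matched as below; the constants and callback signature are SPDK's, while the requeue policy is an assumption added for illustration.

    #include <stdbool.h>
    #include "spdk/nvme.h"
    #include "spdk/nvme_spec.h"

    /* Completion callback: separate "aborted because the queue went away"
     * from other I/O errors so the caller can requeue instead of failing. */
    static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        bool *requeue = arg;

        if (spdk_nvme_cpl_is_error(cpl) &&
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            *requeue = true;   /* sct=0, sc=8: retry once the qpair is back */
            return;
        }
        *requeue = false;
    }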
00:29:36.546 [2024-11-05 04:40:50.098057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.546 [2024-11-05 04:40:50.107460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.546 [2024-11-05 04:40:50.107510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.546 [2024-11-05 04:40:50.107530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.546 [2024-11-05 04:40:50.107539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.546 [2024-11-05 04:40:50.107546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f50c0
00:29:36.546 [2024-11-05 04:40:50.107563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.546 qpair failed and we were unable to recover it.
00:29:36.546 [2024-11-05 04:40:50.117456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:29:36.546 [2024-11-05 04:40:50.117504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:29:36.546 [2024-11-05 04:40:50.117523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:29:36.546 [2024-11-05 04:40:50.117530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:29:36.546 [2024-11-05 04:40:50.117537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x12f50c0
00:29:36.546 [2024-11-05 04:40:50.117552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:36.546 qpair failed and we were unable to recover it.
00:29:36.546 [2024-11-05 04:40:50.118089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12eae00 (9): Bad file descriptor
00:29:36.546 Initializing NVMe Controllers
00:29:36.546 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:36.546 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:36.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:29:36.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:29:36.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:29:36.546 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:29:36.546 Initialization complete. Launching workers.
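The Attaching/Attached/Associating lines above are the disconnect-test initiator standing up one controller and spreading qpairs across four lcores. Reduced to a single qpair, the host-side shape of that setup looks roughly like the sketch below; it is written against SPDK's public API, assumes the SPDK environment has already been initialized, and skips error logging.

    #include <stdio.h>
    #include "spdk/nvme.h"

    int attach_one(void)
    {
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        struct spdk_nvme_qpair *qpair;

        /* Transport ID string mirrors the parameters in the log above. */
        if (spdk_nvme_transport_id_parse(&trid,
                "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
                "subnqn:nqn.2016-06.io.spdk:cnode1") != 0)
            return -1;

        ctrlr = spdk_nvme_connect(&trid, NULL, 0);  /* admin-queue CONNECT */
        if (ctrlr == NULL)
            return -1;

        qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0); /* I/O-queue CONNECT */
        if (qpair == NULL) {
            spdk_nvme_detach(ctrlr);
            return -1;
        }
        printf("attached and allocated one I/O qpair\n");
        return 0;
    }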
00:29:36.546 Starting thread on core 1
00:29:36.546 Starting thread on core 2
00:29:36.546 Starting thread on core 3
00:29:36.546 Starting thread on core 0
00:29:36.546 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:29:36.546
00:29:36.546 real 0m11.461s
00:29:36.546 user 0m21.636s
00:29:36.546 sys 0m3.683s
00:29:36.546 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:36.546 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:36.546 ************************************
00:29:36.546 END TEST nvmf_target_disconnect_tc2
00:29:36.546 ************************************
00:29:36.546 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:29:36.546 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:29:36.546 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:29:36.546 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:36.546 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:29:36.546 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:36.546 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:29:36.546 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:36.546 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:36.807 rmmod nvme_tcp
00:29:36.807 rmmod nvme_fabrics
00:29:36.807 rmmod nvme_keyring
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 3179208 ']'
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 3179208
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 3179208 ']'
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 3179208
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3179208
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']'
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3179208'
00:29:36.807 killing process with pid 3179208
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@971 -- # kill 3179208
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 3179208
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:29:36.807 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:29:37.068 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:37.068 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:37.068 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:37.068 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:37.068 04:40:50 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:38.979 04:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:38.979
00:29:38.979 real 0m21.479s
00:29:38.979 user 0m49.593s
00:29:38.979 sys 0m9.548s
00:29:38.979 04:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:38.979 04:40:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:29:38.979 ************************************
00:29:38.979 END TEST nvmf_target_disconnect
00:29:38.979 ************************************
00:29:38.979 04:40:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:29:38.979
00:29:38.979 real 6m29.036s
00:29:38.979 user 11m22.125s
00:29:38.979 sys 2m10.143s
00:29:38.979 04:40:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable
00:29:38.979 04:40:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:29:38.979 ************************************
00:29:38.979 END TEST nvmf_host
00:29:38.979 ************************************
00:29:38.979 04:40:52 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:29:38.979 04:40:52 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:29:38.979 04:40:52 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:29:38.979 04:40:52 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:29:38.979 04:40:52 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable
00:29:38.979 04:40:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:29:39.240 ************************************
00:29:39.240 START TEST nvmf_target_core_interrupt_mode
00:29:39.240 ************************************
00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:29:39.240 * Looking for test storage... 00:29:39.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:39.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.240 --rc genhtml_branch_coverage=1 00:29:39.240 --rc genhtml_function_coverage=1 00:29:39.240 --rc genhtml_legend=1 00:29:39.240 --rc geninfo_all_blocks=1 00:29:39.240 --rc geninfo_unexecuted_blocks=1 00:29:39.240 00:29:39.240 ' 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:39.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.240 --rc genhtml_branch_coverage=1 00:29:39.240 --rc genhtml_function_coverage=1 00:29:39.240 --rc genhtml_legend=1 00:29:39.240 --rc geninfo_all_blocks=1 00:29:39.240 --rc geninfo_unexecuted_blocks=1 00:29:39.240 00:29:39.240 ' 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:39.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.240 --rc genhtml_branch_coverage=1 00:29:39.240 --rc genhtml_function_coverage=1 00:29:39.240 --rc genhtml_legend=1 00:29:39.240 --rc geninfo_all_blocks=1 00:29:39.240 --rc geninfo_unexecuted_blocks=1 00:29:39.240 00:29:39.240 ' 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:39.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.240 --rc genhtml_branch_coverage=1 00:29:39.240 --rc genhtml_function_coverage=1 00:29:39.240 --rc genhtml_legend=1 00:29:39.240 --rc geninfo_all_blocks=1 00:29:39.240 --rc geninfo_unexecuted_blocks=1 00:29:39.240 00:29:39.240 ' 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.240 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.241 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.502 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:39.502 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:29:39.502 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:29:39.502 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:39.502 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:39.502 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:39.502 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:39.502 ************************************ 00:29:39.502 START TEST nvmf_abort 00:29:39.502 ************************************ 00:29:39.502 04:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:29:39.502 * Looking for test storage... 00:29:39.502 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:29:39.502 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:39.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.503 --rc genhtml_branch_coverage=1 00:29:39.503 --rc genhtml_function_coverage=1 00:29:39.503 --rc genhtml_legend=1 00:29:39.503 --rc geninfo_all_blocks=1 00:29:39.503 --rc geninfo_unexecuted_blocks=1 00:29:39.503 00:29:39.503 ' 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:39.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.503 --rc genhtml_branch_coverage=1 00:29:39.503 --rc genhtml_function_coverage=1 00:29:39.503 --rc genhtml_legend=1 00:29:39.503 --rc geninfo_all_blocks=1 00:29:39.503 --rc geninfo_unexecuted_blocks=1 00:29:39.503 00:29:39.503 ' 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:39.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.503 --rc genhtml_branch_coverage=1 00:29:39.503 --rc genhtml_function_coverage=1 00:29:39.503 --rc genhtml_legend=1 00:29:39.503 --rc geninfo_all_blocks=1 00:29:39.503 --rc geninfo_unexecuted_blocks=1 00:29:39.503 00:29:39.503 ' 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:39.503 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:39.503 --rc genhtml_branch_coverage=1 00:29:39.503 --rc genhtml_function_coverage=1 00:29:39.503 --rc genhtml_legend=1 00:29:39.503 --rc geninfo_all_blocks=1 00:29:39.503 --rc geninfo_unexecuted_blocks=1 00:29:39.503 00:29:39.503 ' 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.503 04:40:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:39.503 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:39.764 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:39.764 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:29:39.764 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:29:39.764 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:39.764 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.764 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:39.764 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:39.764 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:39.764 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.764 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.764 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.764 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:39.764 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:39.764 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:29:39.764 04:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.902 04:41:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:47.902 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:47.902 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.902 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:47.903 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:47.903 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:47.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:47.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.603 ms
00:29:47.903 
00:29:47.903 --- 10.0.0.2 ping statistics ---
00:29:47.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:47.903 rtt min/avg/max/mdev = 0.603/0.603/0.603/0.000 ms
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:47.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:47.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms
00:29:47.903 
00:29:47.903 --- 10.0.0.1 ping statistics ---
00:29:47.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:47.903 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=3184735 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 3184735 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 3184735 ']' 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:47.903 04:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.903 [2024-11-05 04:41:00.524264] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:47.903 [2024-11-05 04:41:00.525405] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:29:47.903 [2024-11-05 04:41:00.525460] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.903 [2024-11-05 04:41:00.614326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:47.903 [2024-11-05 04:41:00.670401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.903 [2024-11-05 04:41:00.670460] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.903 [2024-11-05 04:41:00.670472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.903 [2024-11-05 04:41:00.670487] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.903 [2024-11-05 04:41:00.670494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:47.903 [2024-11-05 04:41:00.672387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:47.903 [2024-11-05 04:41:00.672554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:47.903 [2024-11-05 04:41:00.672556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.903 [2024-11-05 04:41:00.742553] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:47.903 [2024-11-05 04:41:00.742607] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:47.903 [2024-11-05 04:41:00.743327] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
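The preceding trace is the harness's standard topology setup: move one port of the e810 pair into a private network namespace, address both ends, open TCP port 4420, confirm reachability with the pings above, then start the target inside the namespace in interrupt mode. Condensed into a standalone sketch (interface names, addresses, and the nvmf_tgt path are taken from this run; treat it as an illustration of the pattern, not a substitute for nvmf/common.sh):

    # Namespace and addressing, mirroring the traced nvmf/common.sh steps
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # sanity check, as above

    # Target in the namespace; --interrupt-mode lets the reactors sleep on
    # file descriptors between events instead of busy-polling.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &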
00:29:47.903 [2024-11-05 04:41:00.743624] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:47.903 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:47.903 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:29:47.903 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:47.903 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:47.903 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.903 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.904 [2024-11-05 04:41:01.429578] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.904 Malloc0 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.904 Delay0 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:29:47.904 [2024-11-05 04:41:01.521424] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:47.904 04:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:29:48.165 [2024-11-05 04:41:01.649254] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:29:50.706 Initializing NVMe Controllers
00:29:50.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:29:50.706 controller IO queue size 128 less than required
00:29:50.706 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:29:50.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:29:50.706 Initialization complete. Launching workers.
00:29:50.706 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28887
00:29:50.706 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28944, failed to submit 66
00:29:50.706 success 28887, unsuccessful 57, failed 0
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:29:50.707 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 3184735 ']'
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 3184735
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 3184735 ']'
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 3184735
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3184735
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3184735'
killing process with pid 3184735
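The abort test above configures everything over the target's RPC socket: a TCP transport, a 64 MiB malloc bdev wrapped in a delay bdev (so submitted I/O stays queued long enough to be aborted; note that 28887 of the 28944 submitted aborts succeeded), a subsystem exporting that namespace, and a listener. The same sequence as a plain script, with RPC names and arguments taken verbatim from the trace; the scripts/rpc.py entry point is an assumption standing in for the harness's rpc_cmd wrapper:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"   # assumed equivalent of the harness's rpc_cmd

    "$RPC" nvmf_create_transport -t tcp -o -u 8192 -a 256
    "$RPC" bdev_malloc_create 64 4096 -b Malloc0   # 64 MiB bdev, 4 KiB blocks
    # Delay bdev injects ~1,000,000 us of latency per I/O, presumably so the
    # abort example always finds queued commands to cancel.
    "$RPC" bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Drive it with the abort example, exactly as target/abort.sh@30 does:
    # 1 s run, queue depth 128, core mask 0x1.
    "$SPDK/build/examples/abort" \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128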
00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 3184735 00:29:50.707 04:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 3184735 00:29:50.707 04:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:50.707 04:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:50.707 04:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:50.707 04:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:29:50.707 04:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:29:50.707 04:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:50.707 04:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:29:50.707 04:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:50.707 04:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:50.707 04:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.707 04:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.707 04:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.618 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:52.618 00:29:52.618 real 0m13.275s 00:29:52.618 user 0m11.241s 00:29:52.618 sys 0m6.757s 00:29:52.618 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:52.618 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:29:52.618 ************************************ 00:29:52.618 END TEST nvmf_abort 00:29:52.618 ************************************ 00:29:52.618 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:52.618 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:29:52.618 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:52.618 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:52.879 ************************************ 00:29:52.879 START TEST nvmf_ns_hotplug_stress 00:29:52.879 ************************************ 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:29:52.879 * Looking for test storage... 
00:29:52.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:52.879 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:52.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.880 --rc genhtml_branch_coverage=1 00:29:52.880 --rc genhtml_function_coverage=1 00:29:52.880 --rc genhtml_legend=1 00:29:52.880 --rc geninfo_all_blocks=1 00:29:52.880 --rc geninfo_unexecuted_blocks=1 00:29:52.880 00:29:52.880 ' 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:52.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.880 --rc genhtml_branch_coverage=1 00:29:52.880 --rc genhtml_function_coverage=1 00:29:52.880 --rc genhtml_legend=1 00:29:52.880 --rc geninfo_all_blocks=1 00:29:52.880 --rc geninfo_unexecuted_blocks=1 00:29:52.880 00:29:52.880 ' 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:52.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.880 --rc genhtml_branch_coverage=1 00:29:52.880 --rc genhtml_function_coverage=1 00:29:52.880 --rc genhtml_legend=1 00:29:52.880 --rc geninfo_all_blocks=1 00:29:52.880 --rc geninfo_unexecuted_blocks=1 00:29:52.880 00:29:52.880 ' 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:52.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:52.880 --rc genhtml_branch_coverage=1 00:29:52.880 --rc genhtml_function_coverage=1 
00:29:52.880 --rc genhtml_legend=1 00:29:52.880 --rc geninfo_all_blocks=1 00:29:52.880 --rc geninfo_unexecuted_blocks=1 00:29:52.880 00:29:52.880 ' 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
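The cmp_versions trace above is how these scripts decide, for instance, that the installed lcov 1.x predates the 2.x CLI and therefore needs the explicit --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options: both version strings are split on dots and dashes and compared component-wise, with missing components treated as zero. A condensed re-implementation of that idea, not the script's exact code:

# Condensed sketch of the dotted-version comparison traced above.
# Returns success when $1 sorts strictly before $2.
version_lt() {
    local -a v1 v2
    local i
    IFS=.- read -ra v1 <<< "$1"
    IFS=.- read -ra v2 <<< "$2"
    for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "old lcov: pass the branch/function coverage flags explicitly"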
00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:29:52.880 04:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:01.020 04:41:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:01.020 04:41:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:01.020 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:01.020 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.020 
04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:01.020 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:01.020 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:01.020 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:01.020 04:41:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:01.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:01.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:30:01.021 00:30:01.021 --- 10.0.0.2 ping statistics --- 00:30:01.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.021 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:01.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:01.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:30:01.021 00:30:01.021 --- 10.0.0.1 ping statistics --- 00:30:01.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.021 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=3189427 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 3189427 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 3189427 ']' 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:01.021 04:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:01.021 [2024-11-05 04:41:13.884058] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:01.021 [2024-11-05 04:41:13.885200] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:30:01.021 [2024-11-05 04:41:13.885254] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.021 [2024-11-05 04:41:13.984221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:01.021 [2024-11-05 04:41:14.035994] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.021 [2024-11-05 04:41:14.036044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:01.021 [2024-11-05 04:41:14.036053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.021 [2024-11-05 04:41:14.036060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.021 [2024-11-05 04:41:14.036066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:01.021 [2024-11-05 04:41:14.037958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:01.021 [2024-11-05 04:41:14.038261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:01.021 [2024-11-05 04:41:14.038262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.021 [2024-11-05 04:41:14.115479] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:01.021 [2024-11-05 04:41:14.115531] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:01.021 [2024-11-05 04:41:14.116084] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:01.021 [2024-11-05 04:41:14.116394] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
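Everything from the PCI scan down to the reactor start-up notices above assembles the standard two-port loopback rig: one E810 port (cvl_0_0) is moved into a private network namespace as the target side at 10.0.0.2, its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is launched inside the namespace in interrupt mode on cores 1-3 (mask 0xE). Condensed from the commands in the trace:

# The rig assembled above, condensed (device names taken from the trace).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
ping -c 1 10.0.0.2                                                 # reachability check
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE    # target app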
00:30:01.282 04:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:01.282 04:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:30:01.282 04:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:01.282 04:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:01.282 04:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:01.282 04:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.282 04:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:01.282 04:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:01.282 [2024-11-05 04:41:14.899330] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.542 04:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:01.542 04:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.802 [2024-11-05 04:41:15.280048] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.802 04:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:02.063 04:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:02.063 Malloc0 00:30:02.063 04:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:02.324 Delay0 00:30:02.324 04:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:02.584 04:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:02.584 NULL1 00:30:02.584 04:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
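With the target listening, ns_hotplug_stress.sh builds its topology over JSON-RPC and then hammers it: a 32 MB malloc bdev wrapped in a delay bdev (Delay0), a 1000 MB null bdev (NULL1), both attached as namespaces of cnode1, and spdk_nvme_perf reading for 30 seconds while namespace 1 is repeatedly detached, re-attached, and NULL1 grown. The iterations that follow (null_size 1001, 1002, ...) come from a loop equivalent to this sketch, where "rpc" is an editorial shorthand for the scripts/rpc.py path in the trace:

# Topology setup traced above ("rpc" stands in for scripts/rpc.py).
rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0     # 32 MB, 512-byte blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # every latency knob 1000000 us
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc bdev_null_create NULL1 1000 512          # 1000 MB null bdev
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# Stress loop: runs for as long as the perf process stays alive.
./build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
PERF_PID=$!
null_size=1000
while kill -0 "$PERF_PID" 2>/dev/null; do
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    rpc bdev_null_resize NULL1 $(( ++null_size ))   # 1001, 1002, ...
done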
00:30:02.844 04:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3190052 00:30:02.844 04:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:02.844 04:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:02.844 04:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.105 04:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.365 04:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:03.365 04:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:03.365 true 00:30:03.365 04:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:03.365 04:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:03.635 04:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:03.932 04:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:03.932 04:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:03.932 true 00:30:03.932 04:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:03.932 04:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.202 04:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.462 04:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:04.462 04:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:04.462 true 00:30:04.462 04:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:04.462 04:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:04.722 04:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:04.983 04:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:04.983 04:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:04.983 true 00:30:04.983 04:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:04.983 04:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.243 04:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:05.503 04:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:05.503 04:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:05.763 true 00:30:05.763 04:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:05.763 04:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:05.763 04:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.024 04:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:06.024 04:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:06.284 true 00:30:06.284 04:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:06.284 04:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:06.546 04:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:30:06.546 04:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:06.546 04:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:06.807 true 00:30:06.807 04:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:06.807 04:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.067 04:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.067 04:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:07.067 04:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:07.328 true 00:30:07.328 04:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:07.328 04:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:07.588 04:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:07.848 04:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:07.848 04:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:07.848 true 00:30:07.848 04:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:07.848 04:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.108 04:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.369 04:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:08.369 04:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:08.369 true 00:30:08.369 04:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- 
# kill -0 3190052 00:30:08.369 04:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:08.629 04:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:08.889 04:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:08.889 04:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:08.889 true 00:30:09.149 04:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:09.149 04:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.149 04:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.410 04:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:09.410 04:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:09.671 true 00:30:09.671 04:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:09.671 04:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:09.671 04:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:09.931 04:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:09.931 04:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:10.191 true 00:30:10.191 04:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:10.191 04:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.452 04:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.452 04:41:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:10.452 04:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:10.712 true 00:30:10.712 04:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:10.712 04:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:10.973 04:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:10.973 04:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:10.973 04:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:11.233 true 00:30:11.233 04:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:11.233 04:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:11.494 04:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:11.494 04:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:11.494 04:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:11.754 true 00:30:11.754 04:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:11.754 04:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.013 04:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.272 04:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:12.272 04:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:12.272 true 00:30:12.272 04:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:12.272 04:41:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:12.532 04:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:12.792 04:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:12.792 04:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:12.792 true 00:30:12.792 04:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:12.792 04:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.052 04:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.312 04:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:13.312 04:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:13.572 true 00:30:13.572 04:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:13.572 04:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:13.572 04:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:13.832 04:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:13.832 04:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:14.093 true 00:30:14.093 04:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:14.093 04:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.353 04:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.353 04:41:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:14.353 04:41:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:14.614 true 00:30:14.614 04:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:14.614 04:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:14.874 04:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.874 04:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:14.874 04:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:15.134 true 00:30:15.134 04:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:15.134 04:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.396 04:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:15.656 04:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:15.656 04:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:15.656 true 00:30:15.656 04:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:15.656 04:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:15.917 04:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.177 04:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:16.177 04:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:16.177 true 00:30:16.177 04:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:16.177 04:41:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.437 04:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.699 04:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:16.699 04:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:16.699 true 00:30:16.699 04:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:16.699 04:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.961 04:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.222 04:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:17.222 04:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:17.222 true 00:30:17.483 04:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:17.483 04:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.483 04:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.744 04:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:17.744 04:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:18.004 true 00:30:18.004 04:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:18.004 04:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.004 04:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.265 04:41:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:18.265 04:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:18.525 true 00:30:18.525 04:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:18.525 04:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.786 04:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.786 04:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:18.786 04:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:19.047 true 00:30:19.047 04:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:19.047 04:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.307 04:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.307 04:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:30:19.307 04:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:30:19.567 true 00:30:19.567 04:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:19.567 04:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:19.827 04:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.088 04:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:30:20.088 04:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:30:20.088 true 00:30:20.088 04:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:20.088 04:41:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.348 04:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.608 04:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:30:20.608 04:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:30:20.608 true 00:30:20.608 04:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:20.608 04:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.869 04:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.129 04:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:30:21.129 04:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:30:21.129 true 00:30:21.129 04:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:21.129 04:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.388 04:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.648 04:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:30:21.648 04:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:30:21.908 true 00:30:21.908 04:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:21.908 04:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.908 04:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.169 04:41:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:30:22.169 04:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:30:22.429 true 00:30:22.429 04:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:22.429 04:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.429 04:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.690 04:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:30:22.690 04:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:30:22.950 true 00:30:22.950 04:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:22.950 04:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.950 04:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.210 04:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:30:23.210 04:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:30:23.471 true 00:30:23.471 04:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:23.471 04:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.733 04:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.733 04:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:30:23.733 04:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:30:23.993 true 00:30:23.993 04:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:23.993 04:41:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.254 04:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.514 04:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:30:24.514 04:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:30:24.514 true 00:30:24.514 04:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:24.514 04:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.775 04:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.035 04:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:30:25.035 04:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:30:25.035 true 00:30:25.035 04:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:25.035 04:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.297 04:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.558 04:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:30:25.558 04:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:30:25.558 true 00:30:25.819 04:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:25.820 04:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.820 04:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.081 04:41:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:30:26.081 04:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:30:26.341 true 00:30:26.341 04:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:26.341 04:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.341 04:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.601 04:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:30:26.601 04:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:30:26.862 true 00:30:26.862 04:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:26.862 04:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.123 04:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.123 04:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:30:27.123 04:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:30:27.384 true 00:30:27.384 04:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:27.384 04:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.646 04:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.646 04:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:30:27.646 04:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:30:27.908 true 00:30:27.908 04:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:27.908 04:41:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.168 04:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.168 04:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:30:28.168 04:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:30:28.430 true 00:30:28.431 04:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:28.431 04:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:28.691 04:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:28.953 04:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:30:28.953 04:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:30:28.953 true 00:30:28.953 04:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:28.953 04:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.215 04:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.476 04:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:30:29.476 04:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:30:29.476 true 00:30:29.476 04:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:29.476 04:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.737 04:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.998 04:41:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:30:29.998 04:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:30:29.998 true 00:30:29.998 04:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:29.998 04:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.259 04:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.520 04:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:30:30.520 04:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:30:30.781 true 00:30:30.781 04:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:30.781 04:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.782 04:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.042 04:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:30:31.042 04:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:30:31.302 true 00:30:31.302 04:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:31.302 04:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.302 04:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:31.563 04:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:30:31.563 04:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:30:31.823 true 00:30:31.823 04:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052 00:30:31.823 04:41:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:32.084 04:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:32.084 04:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053
00:30:32.084 04:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:30:32.344 true
00:30:32.344 04:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052
00:30:32.344 04:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:32.604 04:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:32.604 04:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:30:32.604 04:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:30:32.865 true
00:30:32.865 04:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052
00:30:32.865 04:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:33.125 04:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:30:33.125 Initializing NVMe Controllers
00:30:33.125 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:33.125 Controller IO queue size 128, less than required.
00:30:33.125 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:30:33.125 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:30:33.125 Initialization complete. Launching workers.
00:30:33.125 ========================================================
00:30:33.125 Latency(us)
00:30:33.125 Device Information : IOPS MiB/s Average min max
00:30:33.125 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30339.70 14.81 4218.78 1482.95 10803.46
00:30:33.125 ========================================================
00:30:33.125 Total : 30339.70 14.81 4218.78 1482.95 10803.46
00:30:33.125
00:30:33.125 04:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:30:33.125 04:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:30:33.386 true
00:30:33.386 04:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3190052
00:30:33.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3190052) - No such process
00:30:33.386 04:41:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3190052
00:30:33.386 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:30:33.696 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:30:34.003 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:30:34.003 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:30:34.003 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:30:34.003 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:34.003 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:30:34.003 null0
00:30:34.003 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:34.003 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:34.003 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:30:34.264 null1
00:30:34.264 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:30:34.264 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:30:34.264 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:30:34.264 null2
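The iterations above are the single-namespace phase of ns_hotplug_stress.sh. Reading the @44-@50 markers back into shell, the driver loop is roughly the sketch below; rpc_py and perf_pid are assumed names rather than anything taken from this log, so treat it as a reconstruction, not the script verbatim:

    # Loop implied by the @44-@50 markers (hedged reconstruction; variable names assumed).
    while kill -0 "$perf_pid"; do                                          # @44: run while the I/O generator is alive
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # @45: hot-remove namespace 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # @46: re-add it backed by the Delay0 bdev
        null_size=$((null_size + 1))                                       # @49: 1007, 1008, ... 1055 in this excerpt
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                      # @50: resize the idle NULL1 bdev (logs "true")
    done

Once the I/O generator exits, kill -0 prints "No such process" and the loop ends, which is exactly the transition logged above. The perf summary also pins down the workload: 14.81 MiB/s at 30339.70 IOPS works out to about 512 bytes per request, sustained through roughly two hotplug cycles per second.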
00:30:34.264 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:34.264 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:34.264 04:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:34.525 null3 00:30:34.525 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:34.525 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:34.525 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:34.787 null4 00:30:34.787 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:34.787 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:34.787 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:34.787 null5 00:30:34.787 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:34.787 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:34.787 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:35.047 null6 00:30:35.047 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:35.047 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:35.047 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:35.310 null7 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
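null0 through null7 come from the @58-@60 markers: eight null bdevs of 100 MB with a 4096-byte block size, one per worker thread. A minimal sketch of that creation loop, under the same assumption that rpc_py names the rpc.py wrapper:

    # Creation loop implied by the @58-@60 markers (hedged reconstruction).
    nthreads=8
    pids=()                                           # @58: filled with worker PIDs further down
    for (( i = 0; i < nthreads; ++i )); do            # @59
        "$rpc_py" bdev_null_create "null$i" 100 4096  # @60: args are name, size in MB, block size in bytes
    done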
00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3196312 3196313 3196315 3196318 3196319 3196321 3196323 3196325 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.310 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:35.572 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:35.572 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:35.572 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:35.572 04:41:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.572 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:35.572 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:35.572 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:35.572 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:35.572 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.572 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.572 04:41:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:35.572 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.572 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.572 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:35.572 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.573 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.573 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:35.573 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.573 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.573 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:35.573 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.573 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.573 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.835 04:41:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:35.835 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.098 04:41:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.098 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.359 04:41:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:36.359 04:41:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:36.621 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.622 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.884 04:41:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:36.884 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:36.885 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.146 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.408 04:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:37.409 04:41:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:37.409 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:37.671 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.671 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.671 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.671 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:37.671 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.671 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:37.671 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.671 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:37.671 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.671 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.671 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:37.671 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.671 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.672 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:37.672 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.672 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.672 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:30:37.672 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.672 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.672 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:37.672 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:37.672 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:37.672 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.672 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.672 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:37.672 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.672 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.672 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
3 nqn.2016-06.io.spdk:cnode1 null2 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:37.934 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:38.196 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:38.196 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.196 04:41:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.196 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:38.196 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:38.196 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.196 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.196 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:38.196 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:38.196 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:38.196 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:38.196 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:38.196 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:38.196 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.196 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.196 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.458 
04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:38.458 04:41:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:38.458 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:38.458 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:38.458 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:38.720 04:41:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.720 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.981 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:38.981 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:38.981 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.981 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:38.981 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:38.981 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:38.981 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:38.981 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:39.243 rmmod nvme_tcp 00:30:39.243 rmmod nvme_fabrics 00:30:39.243 rmmod nvme_keyring 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 3189427 ']' 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 3189427 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 3189427 ']' 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 3189427 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3189427 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
3189427' 00:30:39.243 killing process with pid 3189427 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 3189427 00:30:39.243 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 3189427 00:30:39.504 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:39.504 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:39.504 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:39.504 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:39.504 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:39.504 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:39.504 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:30:39.504 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:39.504 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:39.504 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.504 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:39.504 04:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:42.051 00:30:42.051 real 0m48.790s 00:30:42.051 user 3m3.095s 00:30:42.051 sys 0m22.469s 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:42.051 ************************************ 00:30:42.051 END TEST nvmf_ns_hotplug_stress 00:30:42.051 ************************************ 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:42.051 ************************************ 00:30:42.051 START TEST nvmf_delete_subsystem 00:30:42.051 ************************************ 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:42.051 * Looking for test storage... 00:30:42.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:42.051 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:42.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.052 --rc genhtml_branch_coverage=1 00:30:42.052 --rc genhtml_function_coverage=1 00:30:42.052 --rc genhtml_legend=1 00:30:42.052 --rc geninfo_all_blocks=1 00:30:42.052 --rc geninfo_unexecuted_blocks=1 00:30:42.052 00:30:42.052 ' 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:42.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.052 --rc genhtml_branch_coverage=1 00:30:42.052 --rc genhtml_function_coverage=1 00:30:42.052 --rc genhtml_legend=1 00:30:42.052 --rc geninfo_all_blocks=1 00:30:42.052 --rc geninfo_unexecuted_blocks=1 00:30:42.052 00:30:42.052 ' 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:42.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.052 --rc genhtml_branch_coverage=1 00:30:42.052 --rc genhtml_function_coverage=1 00:30:42.052 --rc genhtml_legend=1 00:30:42.052 --rc geninfo_all_blocks=1 00:30:42.052 --rc geninfo_unexecuted_blocks=1 00:30:42.052 00:30:42.052 ' 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:42.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:42.052 --rc genhtml_branch_coverage=1 00:30:42.052 --rc genhtml_function_coverage=1 00:30:42.052 --rc 
genhtml_legend=1 00:30:42.052 --rc geninfo_all_blocks=1 00:30:42.052 --rc geninfo_unexecuted_blocks=1 00:30:42.052 00:30:42.052 ' 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:42.052 04:41:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:42.052 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:42.053 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:42.053 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:42.053 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:42.053 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:42.053 04:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:48.763 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:48.763 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:48.763 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:48.764 04:42:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:48.764 04:42:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:48.764 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:48.764 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.764 04:42:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:48.764 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:48.764 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:48.764 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:49.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:49.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:30:49.026 00:30:49.026 --- 10.0.0.2 ping statistics --- 00:30:49.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.026 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:49.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:49.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:30:49.026 00:30:49.026 --- 10.0.0.1 ping statistics --- 00:30:49.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:49.026 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=3201581 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 3201581 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 3201581 ']' 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:49.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
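What precedes this is nvmf_tcp_init building a two-endpoint TCP topology on a single host: the first port of the e810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, the two sides are ping-verified, and nvmf_tgt is launched inside the namespace. A minimal sketch of the same sequence, with every device name, address, and flag copied from the log above (the polling loop at the end is an illustrative stand-in for SPDK's waitforlisten helper, which is not shown in full here):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns
modprobe nvme-tcp                                                    # kernel initiator for later connects
ip netns exec cvl_0_0_ns_spdk \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!
# Stand-in for waitforlisten: poll the RPC UNIX socket until the app answers
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done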
00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:49.026 04:42:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.026 [2024-11-05 04:42:02.555600] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:49.026 [2024-11-05 04:42:02.556564] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:30:49.026 [2024-11-05 04:42:02.556603] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:49.026 [2024-11-05 04:42:02.635137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:49.287 [2024-11-05 04:42:02.670075] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:49.287 [2024-11-05 04:42:02.670107] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:49.287 [2024-11-05 04:42:02.670115] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:49.287 [2024-11-05 04:42:02.670122] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:49.287 [2024-11-05 04:42:02.670128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:49.287 [2024-11-05 04:42:02.671342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:49.287 [2024-11-05 04:42:02.671343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.287 [2024-11-05 04:42:02.726012] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:49.288 [2024-11-05 04:42:02.726523] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:49.288 [2024-11-05 04:42:02.726889] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
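The two "Reactor started on core 0/1" notices follow directly from the -m 0x3 mask: each set bit n selects CPU core n, so 0x3 (binary 11) gives the target exactly cores 0 and 1, and --interrupt-mode then switches each spdk_thread from busy polling to interrupt-driven scheduling, which is what the "Set spdk_thread (...) to intr mode" lines record. A quick way to decode such a mask (illustrative one-liner, not part of the test scripts):

mask=0x3; for c in {0..63}; do (( (mask >> c) & 1 )) && echo "reactor on core $c"; done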
00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.858 [2024-11-05 04:42:03.387907] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.858 [2024-11-05 04:42:03.416273] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.858 NULL1 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.858 04:42:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.858 Delay0 00:30:49.858 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.859 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:49.859 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:49.859 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:49.859 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:49.859 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3201633 00:30:49.859 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:49.859 04:42:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:50.119 [2024-11-05 04:42:03.513384] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
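These RPCs build the target that the perf initiator then loads: a 1000 MiB null bdev with 512-byte blocks, wrapped in a delay bdev that injects 1,000,000 us (about one second) of latency on every read and write, exported as a namespace of cnode1. The artificial latency keeps the queue full (spdk_nvme_perf runs with queue depth -q 128 on cores 2 and 3, mask 0xC), so the nvmf_delete_subsystem issued after the two-second sleep is guaranteed to hit in-flight commands; that is what the burst of "completed with error (sct=0, sc=8)" lines below records, status-code type 0 with status 0x08 being the NVMe generic status "Command Aborted due to SQ Deletion". The same sequence collected in one place, arguments copied from the log (rpc.py invocations shortened to the script name):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512                  # 1000 MiB backing device, 512 B blocks
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in us
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
sleep 2
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # tears down the listener and aborts queued I/O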
00:30:52.031 04:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:52.031 04:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:52.031 04:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:30:52.031 Read completed with error (sct=0, sc=8)
00:30:52.031 Write completed with error (sct=0, sc=8)
00:30:52.031 starting I/O failed: -6
[the remaining "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" completions, repeated verbatim, are elided; the unique entries interleaved with them follow]
00:30:52.032 [2024-11-05 04:42:05.594411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f962c0 is same with the state(6) to be set
00:30:52.032 [2024-11-05 04:42:05.595076] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7ffc00d450 is same with the state(6) to be set
00:30:52.973 [2024-11-05 04:42:06.571323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f979a0 is same with the state(6) to be set
00:30:52.973 [2024-11-05 04:42:06.597479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7ffc00d780 is same with the state(6) to be set
00:30:52.974 [2024-11-05 04:42:06.597637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7ffc00cfe0 is same with the state(6) to be set
00:30:52.974 Read
completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 [2024-11-05 04:42:06.597926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f964a0 is same with the state(6) to be set 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Write completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, 
sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 Read completed with error (sct=0, sc=8) 00:30:52.974 [2024-11-05 04:42:06.598238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f96860 is same with the state(6) to be set 00:30:52.974 Initializing NVMe Controllers 00:30:52.974 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:52.974 Controller IO queue size 128, less than required. 00:30:52.974 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:52.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:52.974 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:52.974 Initialization complete. Launching workers. 00:30:52.974 ======================================================== 00:30:52.974 Latency(us) 00:30:52.974 Device Information : IOPS MiB/s Average min max 00:30:52.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 189.11 0.09 892980.56 345.70 1008189.27 00:30:52.974 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 161.24 0.08 975949.84 299.02 2001649.74 00:30:52.974 ======================================================== 00:30:52.974 Total : 350.35 0.17 931165.28 299.02 2001649.74 00:30:52.974 00:30:52.974 [2024-11-05 04:42:06.598608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f979a0 (9): Bad file descriptor 00:30:52.974 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:30:52.974 04:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:52.974 04:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:30:52.974 04:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3201633 00:30:52.974 04:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:30:53.545 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:30:53.545 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3201633 00:30:53.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3201633) - No such process 00:30:53.545 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3201633 00:30:53.545 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:30:53.545 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 3201633 00:30:53.545 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:30:53.545 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:53.545 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@642 -- # type -t wait 00:30:53.545 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:30:53.545 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 3201633 00:30:53.545 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:30:53.545 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:30:53.545 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:30:53.545 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:30:53.545 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:53.546 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.546 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:53.546 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.546 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:53.546 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.546 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:53.546 [2024-11-05 04:42:07.132398] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.546 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.546 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:53.546 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:53.546 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:53.546 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:53.546 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3202356 00:30:53.546 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:30:53.546 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:53.546 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3202356 00:30:53.546 
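The trace above is the re-arm step of the delete-subsystem test: the deliberately orphaned perf pid (3201633) is confirmed dead via a failing `wait` (es=1 is the expected outcome behind the `NOT` helper), the subsystem is rebuilt over RPC, and a fresh load generator is started and polled. A minimal standalone sketch of the same pattern, assuming an SPDK checkout in $rootdir, a target already serving RPCs on the default /var/tmp/spdk.sock, and an existing Delay0 bdev (rpc_cmd in the real suite is a thin wrapper around scripts/rpc.py):

```bash
#!/usr/bin/env bash
# Sketch of the create-subsystem / run-perf / poll pattern from the trace.
rootdir=${rootdir:-/path/to/spdk}   # assumption: point this at an SPDK checkout
rpc() { "$rootdir/scripts/rpc.py" "$@"; }

# Rebuild the subsystem, as in the delete_subsystem.sh@48-50 entries above.
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

# Start the load generator in the background (delete_subsystem.sh@52-54).
"$rootdir/build/bin/spdk_nvme_perf" -c 0xC \
	-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
	-t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!

# Poll with the null signal: kill -0 delivers nothing, it only tests that
# the pid still exists (delete_subsystem.sh@56-60). Give up after ~10 s.
delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
	(( delay++ > 20 )) && break
	sleep 0.5
done
```

The `kill: (3201633) - No such process` line earlier is therefore expected output, not a failure: the test deletes the subsystem out from under a live perf run, lets the perf process die, and treats the failing `wait` as the pass condition.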
04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:53.806 [2024-11-05 04:42:07.204918] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:30:54.067 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:54.067 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3202356 00:30:54.067 04:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:54.638 04:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:54.638 04:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3202356 00:30:54.638 04:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:55.209 04:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:55.209 04:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3202356 00:30:55.209 04:42:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:55.781 04:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:55.781 04:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3202356 00:30:55.781 04:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:56.042 04:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:56.042 04:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3202356 00:30:56.042 04:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:56.612 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:30:56.612 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3202356 00:30:56.612 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:30:56.872 Initializing NVMe Controllers 00:30:56.872 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:56.872 Controller IO queue size 128, less than required. 00:30:56.872 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:56.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:30:56.872 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:30:56.872 Initialization complete. Launching workers. 
00:30:56.872 ========================================================
00:30:56.872                                                                Latency(us)
00:30:56.873 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:30:56.873 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1002149.27 1000188.34 1007407.91
00:30:56.873 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1002981.45 1000258.16 1009524.25
00:30:56.873 ========================================================
00:30:56.873 Total                                                                    :     256.00       0.12 1002565.36 1000188.34 1009524.25
00:30:56.873
00:30:57.132 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:30:57.132 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3202356
00:30:57.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3202356) - No such process
00:30:57.132 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3202356
00:30:57.132 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:30:57.132 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:30:57.132 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:30:57.132 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:30:57.132 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:30:57.132 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:30:57.132 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:30:57.132 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:30:57.132 rmmod nvme_tcp
00:30:57.132 rmmod nvme_fabrics
00:30:57.132 rmmod nvme_keyring
00:30:57.132 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e
00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0
00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 3201581 ']'
00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 3201581
00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 3201581 ']'
00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 3201581
00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname
00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3201581 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3201581' 00:30:57.393 killing process with pid 3201581 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 3201581 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 3201581 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.393 04:42:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:59.939 00:30:59.939 real 0m17.924s 00:30:59.939 user 0m26.142s 00:30:59.939 sys 0m7.199s 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.939 ************************************ 00:30:59.939 END TEST nvmf_delete_subsystem 00:30:59.939 ************************************ 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:59.939 ************************************ 00:30:59.939 START TEST nvmf_host_management 00:30:59.939 ************************************ 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:30:59.939 * Looking for test storage... 00:30:59.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:59.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.939 --rc genhtml_branch_coverage=1 00:30:59.939 --rc genhtml_function_coverage=1 00:30:59.939 --rc genhtml_legend=1 00:30:59.939 --rc geninfo_all_blocks=1 00:30:59.939 --rc geninfo_unexecuted_blocks=1 00:30:59.939 00:30:59.939 ' 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:59.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.939 --rc genhtml_branch_coverage=1 00:30:59.939 --rc genhtml_function_coverage=1 00:30:59.939 --rc genhtml_legend=1 00:30:59.939 --rc geninfo_all_blocks=1 00:30:59.939 --rc geninfo_unexecuted_blocks=1 00:30:59.939 00:30:59.939 ' 00:30:59.939 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.940 --rc genhtml_branch_coverage=1 00:30:59.940 --rc genhtml_function_coverage=1 00:30:59.940 --rc genhtml_legend=1 00:30:59.940 --rc geninfo_all_blocks=1 00:30:59.940 --rc geninfo_unexecuted_blocks=1 00:30:59.940 00:30:59.940 ' 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:59.940 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.940 --rc genhtml_branch_coverage=1 00:30:59.940 --rc genhtml_function_coverage=1 00:30:59.940 --rc genhtml_legend=1 
00:30:59.940 --rc geninfo_all_blocks=1 00:30:59.940 --rc geninfo_unexecuted_blocks=1 00:30:59.940 00:30:59.940 ' 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:59.940 04:42:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:30:59.940 04:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:08.087 04:42:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:08.087 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:08.087 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:08.088 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
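The nvmf/common.sh loop traced through this stretch turns the allow-listed PCI IDs (the e810/x722/mlx arrays built above) into usable kernel interface names by walking sysfs, keeping only ports that are up; on this machine it yields cvl_0_0 and cvl_0_1. A condensed sketch of that discovery logic, assuming pci_devs already holds the matched addresses (0000:4b:00.0 and 0000:4b:00.1 here) and ignoring the RDMA/Mellanox branches the real script also handles:

```bash
#!/usr/bin/env bash
# Sketch of the sysfs walk used to map PCI functions to net devices.
shopt -s nullglob
pci_devs=(0000:4b:00.0 0000:4b:00.1)   # assumption: filled by the ID matching above
net_devs=()

for pci in "${pci_devs[@]}"; do
	# The kernel publishes the interface name as a directory under the PCI node.
	pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
	pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the sysfs path, keep the name
	for net_dev in "${pci_net_devs[@]}"; do
		# Keep only links that report as up, mirroring the [[ up == up ]] checks.
		[[ $(< "/sys/class/net/$net_dev/operstate") == up ]] || continue
		echo "Found net devices under $pci: $net_dev"
		net_devs+=("$net_dev")
	done
done
```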
00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:08.088 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:08.088 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:08.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:08.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.666 ms 00:31:08.088 00:31:08.088 --- 10.0.0.2 ping statistics --- 00:31:08.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.088 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:08.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:08.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:31:08.088 00:31:08.088 --- 10.0.0.1 ping statistics --- 00:31:08.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.088 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=3207740 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 3207740 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3207740 ']' 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:08.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:08.088 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:08.089 04:42:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.089 [2024-11-05 04:42:20.663315] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:08.089 [2024-11-05 04:42:20.664445] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:31:08.089 [2024-11-05 04:42:20.664499] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:08.089 [2024-11-05 04:42:20.765294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:08.089 [2024-11-05 04:42:20.818321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:08.089 [2024-11-05 04:42:20.818376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:08.089 [2024-11-05 04:42:20.818386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:08.089 [2024-11-05 04:42:20.818393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:08.089 [2024-11-05 04:42:20.818399] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:08.089 [2024-11-05 04:42:20.820358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:08.089 [2024-11-05 04:42:20.820527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:08.089 [2024-11-05 04:42:20.820693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:08.089 [2024-11-05 04:42:20.820694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:08.089 [2024-11-05 04:42:20.895581] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:08.089 [2024-11-05 04:42:20.896283] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:08.089 [2024-11-05 04:42:20.896913] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:08.089 [2024-11-05 04:42:20.897272] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:08.089 [2024-11-05 04:42:20.897377] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
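With the target now up in its namespace (reactors started, poll groups in interrupt mode), the topology that nvmf_tcp_init assembled above can be recapped as a standalone sketch. Interface names, addresses, and the iptables rule are taken verbatim from this log; the function name itself is illustrative, not the real common.sh helper.

setup_tcp_topology() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

    # Start from clean interfaces on both ends
    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    # Move the target-side NIC into a private namespace so target and
    # initiator traffic crosses the physical link instead of loopback
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"

    # Initiator keeps 10.0.0.1; the target gets 10.0.0.2 inside the namespace
    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Open the NVMe/TCP port, tagged with a comment so the teardown path
    # (iptables-save | grep -v SPDK_NVMF | iptables-restore) can strip
    # exactly these rules again
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

    # Verify reachability in both directions before starting nvmf_tgt
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
}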
00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.089 [2024-11-05 04:42:21.517552] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.089 Malloc0 00:31:08.089 [2024-11-05 04:42:21.613837] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3207917 00:31:08.089 04:42:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3207917 /var/tmp/bdevperf.sock 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 3207917 ']' 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:08.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:08.089 { 00:31:08.089 "params": { 00:31:08.089 "name": "Nvme$subsystem", 00:31:08.089 "trtype": "$TEST_TRANSPORT", 00:31:08.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:08.089 "adrfam": "ipv4", 00:31:08.089 "trsvcid": "$NVMF_PORT", 00:31:08.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:08.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:08.089 "hdgst": ${hdgst:-false}, 00:31:08.089 "ddgst": ${ddgst:-false} 00:31:08.089 }, 00:31:08.089 "method": "bdev_nvme_attach_controller" 00:31:08.089 } 00:31:08.089 EOF 00:31:08.089 )") 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
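The config+=("$(cat <<-EOF ... EOF)") trace above is gen_nvmf_target_json building bdevperf's attach-controller config on the fly; the IFS=, / printf join traced just below completes it. A condensed sketch of the pattern, with the stanza fields copied from this log; the outer "subsystems"/"bdev" envelope is an assumption about the full helper, and gen_target_json is an illustrative name.

gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-0}"; do
        # One attach-controller stanza per subsystem id
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the stanzas with ',' and pretty-print the result
    local IFS=,
    jq . <<< "{\"subsystems\": [{\"subsystem\": \"bdev\", \"config\": [${config[*]}]}]}"
}

bdevperf never sees a file on disk: the test hands the generated JSON over an anonymous descriptor via process substitution, which is why the command line above reads --json /dev/fd/63, along the lines of:
    bdevperf -r /var/tmp/bdevperf.sock --json <(gen_target_json 0) -q 64 -o 65536 -w verify -t 10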
00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:08.089 04:42:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:08.089 "params": { 00:31:08.089 "name": "Nvme0", 00:31:08.089 "trtype": "tcp", 00:31:08.089 "traddr": "10.0.0.2", 00:31:08.089 "adrfam": "ipv4", 00:31:08.089 "trsvcid": "4420", 00:31:08.089 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:08.089 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:08.089 "hdgst": false, 00:31:08.089 "ddgst": false 00:31:08.089 }, 00:31:08.089 "method": "bdev_nvme_attach_controller" 00:31:08.089 }' 00:31:08.089 [2024-11-05 04:42:21.722905] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:31:08.089 [2024-11-05 04:42:21.722963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3207917 ] 00:31:08.351 [2024-11-05 04:42:21.794051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.351 [2024-11-05 04:42:21.830264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:08.611 Running I/O for 10 seconds... 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=643 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 643 -ge 100 ']' 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:09.183 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:09.184 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:09.184 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:09.184 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.184 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:31:09.184 [2024-11-05 04:42:22.607968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1f2a0 is same with the state(6) to be set
[log condensed: the *ERROR* line above repeats verbatim, timestamps advancing from 04:42:22.607968 to 04:42:22.608439]
00:31:09.184 [2024-11-05 04:42:22.608895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.184 [2024-11-05 04:42:22.608932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: the READ / ABORTED - SQ DELETION pair above repeats for cid 1 through cid 62, lba advancing by 128 from 90240 to 98048]
00:31:09.186 [2024-11-05 04:42:22.610009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:09.186 [2024-11-05 04:42:22.610016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:09.186 [2024-11-05 04:42:22.610025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18862b0 is same with the state(6) to be set
00:31:09.186 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.186 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:09.186 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:09.186 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
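The waitforio loop whose trace opened this burst (host_management.sh lines 54 through 64 above) polls bdevperf's RPC socket until the bdev shows real read traffic; here the first probe already returned 643 reads, so the loop broke immediately. Reconstructed from the trace as a standalone sketch; the direct rpc.py invocation stands in for the test's rpc_cmd wrapper, and the pacing sleep is an assumption, not visible in the trace.

waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        # Ask bdevperf for iostat and pull the read-op counter out with jq
        read_io_count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 1 # interval between probes is an assumed placeholder
    done
    return $ret
}
# waitforio /var/tmp/bdevperf.sock Nvme0n1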
00:31:09.186 [2024-11-05 04:42:22.611298] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:09.186 task offset: 90112 on job bdev=Nvme0n1 fails 00:31:09.186 00:31:09.186 Latency(us) 00:31:09.186 [2024-11-05T03:42:22.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:09.186 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:09.186 Job: Nvme0n1 ended in about 0.51 seconds with error 00:31:09.186 Verification LBA range: start 0x0 length 0x400 00:31:09.186 Nvme0n1 : 0.51 1368.06 85.50 124.37 0.00 41804.18 4478.29 36263.25 00:31:09.186 [2024-11-05T03:42:22.826Z] =================================================================================================================== 00:31:09.186 [2024-11-05T03:42:22.826Z] Total : 1368.06 85.50 124.37 0.00 41804.18 4478.29 36263.25 00:31:09.186 [2024-11-05 04:42:22.613319] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:09.186 [2024-11-05 04:42:22.613344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166d000 (9): Bad file descriptor 00:31:09.186 [2024-11-05 04:42:22.614419] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:31:09.186 [2024-11-05 04:42:22.614532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:31:09.186 [2024-11-05 04:42:22.614563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:09.186 [2024-11-05 04:42:22.614577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:31:09.186 [2024-11-05 04:42:22.614586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:31:09.186 [2024-11-05 04:42:22.614593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:31:09.186 [2024-11-05 04:42:22.614600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x166d000 00:31:09.186 [2024-11-05 04:42:22.614622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x166d000 (9): Bad file descriptor 00:31:09.186 [2024-11-05 04:42:22.614636] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:09.186 [2024-11-05 04:42:22.614644] nvme_ctrlr.c:1799:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:09.186 [2024-11-05 04:42:22.614653] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:09.186 [2024-11-05 04:42:22.614667] bdev_nvme.c:2234:_bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
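Boiled down, the failure burst above is the host-management round trip this test exists to exercise: nvmf_subsystem_remove_host yanks the host's ACL entry, the target tears down the queue pair (the ABORTED - SQ DELETION storm), bdevperf's automatic reset is then refused at FABRIC CONNECT ('does not allow host', sct 1 / sc 132), and nvmf_subsystem_add_host restores access so the follow-up run can connect. As bare RPCs, with the NQNs as they appear in this log:

scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# ... in-flight I/O aborts; the reconnect fails with "does not allow host" ...
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0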
00:31:09.186 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:09.186 04:42:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:10.127 04:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3207917 00:31:10.127 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3207917) - No such process 00:31:10.127 04:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:10.127 04:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:10.127 04:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:10.127 04:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:10.127 04:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:10.127 04:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:10.127 04:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:10.127 04:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:10.127 { 00:31:10.127 "params": { 00:31:10.127 "name": "Nvme$subsystem", 00:31:10.127 "trtype": "$TEST_TRANSPORT", 00:31:10.127 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.127 "adrfam": "ipv4", 00:31:10.127 "trsvcid": "$NVMF_PORT", 00:31:10.127 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.127 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.127 "hdgst": ${hdgst:-false}, 00:31:10.127 "ddgst": ${ddgst:-false} 00:31:10.127 }, 00:31:10.127 "method": "bdev_nvme_attach_controller" 00:31:10.127 } 00:31:10.127 EOF 00:31:10.127 )") 00:31:10.127 04:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:10.127 04:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:10.127 04:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:10.127 04:42:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:10.127 "params": { 00:31:10.127 "name": "Nvme0", 00:31:10.127 "trtype": "tcp", 00:31:10.127 "traddr": "10.0.0.2", 00:31:10.127 "adrfam": "ipv4", 00:31:10.127 "trsvcid": "4420", 00:31:10.127 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.127 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:10.127 "hdgst": false, 00:31:10.128 "ddgst": false 00:31:10.128 }, 00:31:10.128 "method": "bdev_nvme_attach_controller" 00:31:10.128 }' 00:31:10.128 [2024-11-05 04:42:23.682251] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:31:10.128 [2024-11-05 04:42:23.682308] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3208337 ] 00:31:10.128 [2024-11-05 04:42:23.752585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.388 [2024-11-05 04:42:23.787689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:10.648 Running I/O for 1 seconds... 00:31:11.590 1536.00 IOPS, 96.00 MiB/s 00:31:11.590 Latency(us) 00:31:11.590 [2024-11-05T03:42:25.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.590 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:11.590 Verification LBA range: start 0x0 length 0x400 00:31:11.590 Nvme0n1 : 1.06 1502.78 93.92 0.00 0.00 40310.77 7154.35 52428.80 00:31:11.590 [2024-11-05T03:42:25.230Z] =================================================================================================================== 00:31:11.590 [2024-11-05T03:42:25.230Z] Total : 1502.78 93.92 0.00 0.00 40310.77 7154.35 52428.80 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:11.851 rmmod nvme_tcp 00:31:11.851 rmmod nvme_fabrics 00:31:11.851 rmmod nvme_keyring 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 3207740 ']' 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 3207740 00:31:11.851 04:42:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 3207740 ']' 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 3207740 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3207740 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3207740' 00:31:11.851 killing process with pid 3207740 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 3207740 00:31:11.851 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 3207740 00:31:12.111 [2024-11-05 04:42:25.546581] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:12.111 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:12.111 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:12.111 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:12.111 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:12.111 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:12.111 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:12.111 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:12.111 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:12.111 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:12.111 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.111 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:12.111 04:42:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.023 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:14.023 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:14.023 00:31:14.023 real 0m14.517s 00:31:14.023 user 
0m19.716s 00:31:14.023 sys 0m7.376s 00:31:14.023 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:14.023 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:14.023 ************************************ 00:31:14.023 END TEST nvmf_host_management 00:31:14.023 ************************************ 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:14.284 ************************************ 00:31:14.284 START TEST nvmf_lvol 00:31:14.284 ************************************ 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:14.284 * Looking for test storage... 00:31:14.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:14.284 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:14.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.284 --rc genhtml_branch_coverage=1 00:31:14.284 --rc genhtml_function_coverage=1 00:31:14.284 --rc genhtml_legend=1 00:31:14.284 --rc geninfo_all_blocks=1 00:31:14.284 --rc geninfo_unexecuted_blocks=1 00:31:14.284 00:31:14.285 ' 00:31:14.285 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:14.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.285 --rc genhtml_branch_coverage=1 00:31:14.285 --rc genhtml_function_coverage=1 00:31:14.285 --rc genhtml_legend=1 00:31:14.285 --rc geninfo_all_blocks=1 00:31:14.285 --rc geninfo_unexecuted_blocks=1 00:31:14.285 00:31:14.285 ' 00:31:14.285 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:14.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.285 --rc genhtml_branch_coverage=1 00:31:14.285 --rc genhtml_function_coverage=1 00:31:14.285 --rc genhtml_legend=1 00:31:14.285 --rc geninfo_all_blocks=1 00:31:14.285 --rc geninfo_unexecuted_blocks=1 00:31:14.285 00:31:14.285 ' 00:31:14.285 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:14.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.285 --rc genhtml_branch_coverage=1 00:31:14.285 --rc genhtml_function_coverage=1 
00:31:14.285 --rc genhtml_legend=1 00:31:14.285 --rc geninfo_all_blocks=1 00:31:14.285 --rc geninfo_unexecuted_blocks=1 00:31:14.285 00:31:14.285 ' 00:31:14.285 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.285 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:14.285 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.285 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.285 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.285 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.285 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.285 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.285 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.285 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.285 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.285 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.546 04:42:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:14.546 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:14.547 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.547 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:14.547 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.547 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:14.547 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:14.547 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:14.547 04:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:22.687 04:42:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:22.687 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:22.687 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:22.687 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:22.688 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:22.688 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:22.688 
04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:22.688 04:42:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:22.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:22.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:31:22.688 00:31:22.688 --- 10.0.0.2 ping statistics --- 00:31:22.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.688 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:22.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:22.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:31:22.688 00:31:22.688 --- 10.0.0.1 ping statistics --- 00:31:22.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:22.688 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=3212799 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 3212799 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 3212799 ']' 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:22.688 04:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:22.688 [2024-11-05 04:42:35.225432] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
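The namespace wiring that nvmf_tcp_init traced above, collected in one place: the target port (cvl_0_0) is moved into its own netns so NVMe/TCP traffic actually crosses the wire between 10.0.0.1 (initiator, root namespace) and 10.0.0.2 (target namespace) rather than being short-circuited by the local stack. Each command below appears in the trace; only the comments are added:

ip netns add cvl_0_0_ns_spdk                        # namespace for the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> root ns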
00:31:22.688 [2024-11-05 04:42:35.226577] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:31:22.688 [2024-11-05 04:42:35.226633] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:22.688 [2024-11-05 04:42:35.309844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:22.688 [2024-11-05 04:42:35.350800] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:22.688 [2024-11-05 04:42:35.350838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:22.688 [2024-11-05 04:42:35.350846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:22.688 [2024-11-05 04:42:35.350853] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:22.688 [2024-11-05 04:42:35.350859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:22.689 [2024-11-05 04:42:35.352244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.689 [2024-11-05 04:42:35.352362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:22.689 [2024-11-05 04:42:35.352364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.689 [2024-11-05 04:42:35.408375] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:22.689 [2024-11-05 04:42:35.408932] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:22.689 [2024-11-05 04:42:35.409218] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:22.689 [2024-11-05 04:42:35.409482] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
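On the core-mask bookkeeping: -m 0x7 selects cores 0-2, one reactor each, matching the three "Reactor started" lines above, and with --interrupt-mode each poll-group thread is switched from busy polling to event-driven wakeups. The mask is plain bit arithmetic, bit N selecting core N:

# 0x7 = 0b111 -> cores 0, 1, 2
printf '0x%x\n' "$(( (1 << 0) | (1 << 1) | (1 << 2) ))"    # prints 0x7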
00:31:22.689 04:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:22.689 04:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:31:22.689 04:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:22.689 04:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:22.689 04:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:22.689 04:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:22.689 04:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:22.689 [2024-11-05 04:42:36.229274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:22.689 04:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:22.950 04:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:22.950 04:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:23.210 04:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:23.210 04:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:23.471 04:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:23.471 04:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e84f9d71-a748-4738-b8ba-ca20cfc5c811 00:31:23.471 04:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e84f9d71-a748-4738-b8ba-ca20cfc5c811 lvol 20 00:31:23.732 04:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=ae36759b-8e72-45cc-8292-b05de32a78c3 00:31:23.732 04:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:23.993 04:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ae36759b-8e72-45cc-8292-b05de32a78c3 00:31:23.993 04:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:24.253 [2024-11-05 04:42:37.689049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:24.253 04:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:24.253 04:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:24.253 04:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3213257 00:31:24.253 04:42:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:25.636 04:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot ae36759b-8e72-45cc-8292-b05de32a78c3 MY_SNAPSHOT 00:31:25.636 04:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e23032bc-5278-4dcd-8b3a-0062614fde4a 00:31:25.636 04:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize ae36759b-8e72-45cc-8292-b05de32a78c3 30 00:31:25.897 04:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e23032bc-5278-4dcd-8b3a-0062614fde4a MY_CLONE 00:31:26.165 04:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f8dd7990-8f1b-4de9-9f5f-33356b72d0a0 00:31:26.165 04:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f8dd7990-8f1b-4de9-9f5f-33356b72d0a0 00:31:26.432 04:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3213257 00:31:36.432 Initializing NVMe Controllers 00:31:36.432 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:36.432 Controller IO queue size 128, less than required. 00:31:36.432 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:36.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:36.432 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:36.432 Initialization complete. Launching workers. 
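The whole lvol data path this test exercises, condensed into one RPC sequence (rpc.py abbreviates the full scripts/rpc.py path; sizes are in MiB, and the UUIDs seen in the trace are values returned by the create calls, not inputs — a sketch of the sequence, not a drop-in script):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                    # 64 MiB, 512 B blocks -> Malloc0
rpc.py bdev_malloc_create 64 512                    # -> Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)    # lvstore on the raid0 bdev
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB logical volume
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # freeze current data
rpc.py bdev_lvol_resize "$lvol" 30                  # grow the live volume under I/O
clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)    # thin clone of the snapshot
rpc.py bdev_lvol_inflate "$clone"                   # make the clone self-contained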
00:31:36.432 ======================================================== 00:31:36.432 Latency(us) 00:31:36.432 Device Information : IOPS MiB/s Average min max 00:31:36.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12348.20 48.24 10370.02 1849.96 78175.57 00:31:36.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15344.00 59.94 8342.31 504.68 57790.26 00:31:36.432 ======================================================== 00:31:36.432 Total : 27692.19 108.17 9246.49 504.68 78175.57 00:31:36.432 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ae36759b-8e72-45cc-8292-b05de32a78c3 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e84f9d71-a748-4738-b8ba-ca20cfc5c811 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:36.432 rmmod nvme_tcp 00:31:36.432 rmmod nvme_fabrics 00:31:36.432 rmmod nvme_keyring 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 3212799 ']' 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 3212799 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 3212799 ']' 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 3212799 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3212799 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3212799' 00:31:36.432 killing process with pid 3212799 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 3212799 00:31:36.432 04:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 3212799 00:31:36.433 04:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:36.433 04:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:36.433 04:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:36.433 04:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:36.433 04:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:36.433 04:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:36.433 04:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:36.433 04:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:36.433 04:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:36.433 04:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.433 04:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.433 04:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:37.821 00:31:37.821 real 0m23.492s 00:31:37.821 user 0m55.538s 00:31:37.821 sys 0m10.557s 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:37.821 ************************************ 00:31:37.821 END TEST nvmf_lvol 00:31:37.821 ************************************ 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:37.821 ************************************ 00:31:37.821 START TEST nvmf_lvs_grow 00:31:37.821 
************************************ 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:37.821 * Looking for test storage... 00:31:37.821 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:37.821 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:38.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.083 --rc genhtml_branch_coverage=1 00:31:38.083 --rc genhtml_function_coverage=1 00:31:38.083 --rc genhtml_legend=1 00:31:38.083 --rc geninfo_all_blocks=1 00:31:38.083 --rc geninfo_unexecuted_blocks=1 00:31:38.083 00:31:38.083 ' 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:38.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.083 --rc genhtml_branch_coverage=1 00:31:38.083 --rc genhtml_function_coverage=1 00:31:38.083 --rc genhtml_legend=1 00:31:38.083 --rc geninfo_all_blocks=1 00:31:38.083 --rc geninfo_unexecuted_blocks=1 00:31:38.083 00:31:38.083 ' 00:31:38.083 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:38.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.083 --rc genhtml_branch_coverage=1 00:31:38.083 --rc genhtml_function_coverage=1 00:31:38.083 --rc genhtml_legend=1 00:31:38.083 --rc geninfo_all_blocks=1 00:31:38.083 --rc geninfo_unexecuted_blocks=1 00:31:38.084 00:31:38.084 ' 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:38.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.084 --rc genhtml_branch_coverage=1 00:31:38.084 --rc genhtml_function_coverage=1 00:31:38.084 --rc genhtml_legend=1 00:31:38.084 --rc geninfo_all_blocks=1 00:31:38.084 --rc geninfo_unexecuted_blocks=1 00:31:38.084 00:31:38.084 ' 00:31:38.084 04:42:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
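[editor's note] The heavily repeated /opt/go, /opt/protoc and /opt/golangci segments in the PATH values above come from paths/export.sh prepending the same toolchain directories once per sourced script. A minimal sketch of a duplicate-aware prepend, assuming a hypothetical helper (path_prepend is not part of the SPDK tree):

    path_prepend() {
        # Prepend $1 to PATH only if it is not already a component.
        case ":$PATH:" in
            *":$1:"*) ;;                  # already present: leave PATH untouched
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/golangci/1.54.2/bin
    export PATH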
00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:38.084 04:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:46.232 04:42:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
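[editor's note] gather_supported_nvmf_pci_devs, traced above, buckets NICs by PCI vendor:device ID (0x8086:0x159b lands in the e810 array on this host) and then resolves each PCI function to its kernel net device through sysfs. A condensed sketch of that lookup, using the two PCI addresses this run discovered:

    # Map a PCI function to its net device(s) via sysfs, as the
    # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) expansion above does.
    for pci in 0000:4b:00.0 0000:4b:00.1; do
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            [ -e "$dev" ] || continue                   # glob did not match
            echo "PCI $pci -> net device ${dev##*/}"    # cvl_0_0 / cvl_0_1 here
        done
    done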
00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:46.232 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:46.232 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:46.232 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:46.232 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:46.232 04:42:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:46.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:46.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.645 ms 00:31:46.232 00:31:46.232 --- 10.0.0.2 ping statistics --- 00:31:46.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.232 rtt min/avg/max/mdev = 0.645/0.645/0.645/0.000 ms 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:46.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:46.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.295 ms 00:31:46.232 00:31:46.232 --- 10.0.0.1 ping statistics --- 00:31:46.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.232 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=3219515 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 3219515 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 3219515 ']' 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:46.232 04:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:46.232 [2024-11-05 04:42:58.887963] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
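[editor's note] nvmf_tcp_init, traced above, splits the two-port E810 link into an initiator side and a target side on one host: one port is moved into a private network namespace, the two ends get back-to-back addresses, and an iptables rule tagged SPDK_NVMF opens the NVMe/TCP port so teardown can later grep it back out. Condensed from the trace (device names and addresses are the ones this run used):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                   # verify the path before nvmf_tgt starts

The nvmf_tgt launched next runs inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1), which is why the target listens on 10.0.0.2 while the initiator connects from the default namespace.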
00:31:46.232 [2024-11-05 04:42:58.889131] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:31:46.232 [2024-11-05 04:42:58.889184] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.232 [2024-11-05 04:42:58.970960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.232 [2024-11-05 04:42:59.011177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.232 [2024-11-05 04:42:59.011213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.232 [2024-11-05 04:42:59.011221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.232 [2024-11-05 04:42:59.011228] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.232 [2024-11-05 04:42:59.011234] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:46.232 [2024-11-05 04:42:59.011837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:46.232 [2024-11-05 04:42:59.067450] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:46.232 [2024-11-05 04:42:59.067708] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:46.232 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:46.232 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:31:46.232 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:46.232 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:46.232 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:46.232 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:46.232 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:46.493 [2024-11-05 04:42:59.892632] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:46.493 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:46.493 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:46.493 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:46.493 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:46.493 ************************************ 00:31:46.493 START TEST lvs_grow_clean 00:31:46.493 ************************************ 00:31:46.493 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:31:46.493 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:46.493 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:46.493 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:46.493 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:46.493 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:46.493 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:46.493 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:46.493 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:46.494 04:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:46.755 04:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:46.755 04:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:46.755 04:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=49dc5de9-0ab7-487a-a6ff-326b6c6a61eb 00:31:46.755 04:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dc5de9-0ab7-487a-a6ff-326b6c6a61eb 00:31:46.755 04:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:47.015 04:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:47.015 04:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:47.015 04:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 49dc5de9-0ab7-487a-a6ff-326b6c6a61eb lvol 150 00:31:47.275 04:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d406988d-bfe2-45b3-b366-fbd245735dbb 00:31:47.275 04:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:47.275 04:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:47.275 [2024-11-05 04:43:00.856281] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:47.275 [2024-11-05 04:43:00.856438] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:47.275 true 00:31:47.275 04:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:47.275 04:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dc5de9-0ab7-487a-a6ff-326b6c6a61eb 00:31:47.536 04:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:47.536 04:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:47.796 04:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d406988d-bfe2-45b3-b366-fbd245735dbb 00:31:47.796 04:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:48.057 [2024-11-05 04:43:01.504875] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:48.058 04:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:48.058 04:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3220203 00:31:48.058 04:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:48.058 04:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:48.058 04:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3220203 /var/tmp/bdevperf.sock 00:31:48.058 04:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 3220203 ']' 00:31:48.058 04:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:48.058 04:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:48.058 04:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:48.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:48.058 04:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:48.058 04:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:48.318 [2024-11-05 04:43:01.714813] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:31:48.318 [2024-11-05 04:43:01.714869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3220203 ] 00:31:48.318 [2024-11-05 04:43:01.801935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.318 [2024-11-05 04:43:01.838485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:48.888 04:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:48.888 04:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:31:48.888 04:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:49.149 Nvme0n1 00:31:49.149 04:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:49.409 [ 00:31:49.409 { 00:31:49.409 "name": "Nvme0n1", 00:31:49.409 "aliases": [ 00:31:49.409 "d406988d-bfe2-45b3-b366-fbd245735dbb" 00:31:49.409 ], 00:31:49.409 "product_name": "NVMe disk", 00:31:49.409 "block_size": 4096, 00:31:49.409 "num_blocks": 38912, 00:31:49.409 "uuid": "d406988d-bfe2-45b3-b366-fbd245735dbb", 00:31:49.409 "numa_id": 0, 00:31:49.409 "assigned_rate_limits": { 00:31:49.409 "rw_ios_per_sec": 0, 00:31:49.409 "rw_mbytes_per_sec": 0, 00:31:49.409 "r_mbytes_per_sec": 0, 00:31:49.409 "w_mbytes_per_sec": 0 00:31:49.409 }, 00:31:49.409 "claimed": false, 00:31:49.409 "zoned": false, 00:31:49.409 "supported_io_types": { 00:31:49.409 "read": true, 00:31:49.409 "write": true, 00:31:49.409 "unmap": true, 00:31:49.409 "flush": true, 00:31:49.409 "reset": true, 00:31:49.409 "nvme_admin": true, 00:31:49.409 "nvme_io": true, 00:31:49.409 "nvme_io_md": false, 00:31:49.409 "write_zeroes": true, 00:31:49.409 "zcopy": false, 00:31:49.409 "get_zone_info": false, 00:31:49.409 "zone_management": false, 00:31:49.409 "zone_append": false, 00:31:49.409 "compare": true, 00:31:49.409 "compare_and_write": true, 00:31:49.409 "abort": true, 00:31:49.409 "seek_hole": false, 00:31:49.409 "seek_data": false, 00:31:49.409 "copy": true, 
00:31:49.409 "nvme_iov_md": false 00:31:49.409 }, 00:31:49.409 "memory_domains": [ 00:31:49.409 { 00:31:49.409 "dma_device_id": "system", 00:31:49.409 "dma_device_type": 1 00:31:49.409 } 00:31:49.409 ], 00:31:49.409 "driver_specific": { 00:31:49.409 "nvme": [ 00:31:49.409 { 00:31:49.409 "trid": { 00:31:49.409 "trtype": "TCP", 00:31:49.409 "adrfam": "IPv4", 00:31:49.409 "traddr": "10.0.0.2", 00:31:49.409 "trsvcid": "4420", 00:31:49.409 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:49.409 }, 00:31:49.409 "ctrlr_data": { 00:31:49.409 "cntlid": 1, 00:31:49.409 "vendor_id": "0x8086", 00:31:49.409 "model_number": "SPDK bdev Controller", 00:31:49.409 "serial_number": "SPDK0", 00:31:49.409 "firmware_revision": "25.01", 00:31:49.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:49.409 "oacs": { 00:31:49.409 "security": 0, 00:31:49.409 "format": 0, 00:31:49.409 "firmware": 0, 00:31:49.409 "ns_manage": 0 00:31:49.409 }, 00:31:49.409 "multi_ctrlr": true, 00:31:49.409 "ana_reporting": false 00:31:49.409 }, 00:31:49.409 "vs": { 00:31:49.409 "nvme_version": "1.3" 00:31:49.409 }, 00:31:49.409 "ns_data": { 00:31:49.409 "id": 1, 00:31:49.409 "can_share": true 00:31:49.409 } 00:31:49.409 } 00:31:49.409 ], 00:31:49.409 "mp_policy": "active_passive" 00:31:49.409 } 00:31:49.409 } 00:31:49.409 ] 00:31:49.409 04:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3220315 00:31:49.409 04:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:49.409 04:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:49.409 Running I/O for 10 seconds... 
00:31:50.791 Latency(us) 00:31:50.791 [2024-11-05T03:43:04.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:50.791 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.791 Nvme0n1 : 1.00 17838.00 69.68 0.00 0.00 0.00 0.00 0.00 00:31:50.791 [2024-11-05T03:43:04.431Z] =================================================================================================================== 00:31:50.791 [2024-11-05T03:43:04.431Z] Total : 17838.00 69.68 0.00 0.00 0.00 0.00 0.00 00:31:50.791 00:31:51.362 04:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 49dc5de9-0ab7-487a-a6ff-326b6c6a61eb 00:31:51.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:51.622 Nvme0n1 : 2.00 17879.00 69.84 0.00 0.00 0.00 0.00 0.00 00:31:51.622 [2024-11-05T03:43:05.262Z] =================================================================================================================== 00:31:51.622 [2024-11-05T03:43:05.262Z] Total : 17879.00 69.84 0.00 0.00 0.00 0.00 0.00 00:31:51.622 00:31:51.622 true 00:31:51.622 04:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dc5de9-0ab7-487a-a6ff-326b6c6a61eb 00:31:51.622 04:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:51.882 04:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:51.882 04:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:51.882 04:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3220315 00:31:52.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:52.453 Nvme0n1 : 3.00 17913.67 69.98 0.00 0.00 0.00 0.00 0.00 00:31:52.453 [2024-11-05T03:43:06.093Z] =================================================================================================================== 00:31:52.453 [2024-11-05T03:43:06.093Z] Total : 17913.67 69.98 0.00 0.00 0.00 0.00 0.00 00:31:52.453 00:31:53.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:53.394 Nvme0n1 : 4.00 17935.75 70.06 0.00 0.00 0.00 0.00 0.00 00:31:53.394 [2024-11-05T03:43:07.034Z] =================================================================================================================== 00:31:53.394 [2024-11-05T03:43:07.034Z] Total : 17935.75 70.06 0.00 0.00 0.00 0.00 0.00 00:31:53.394 00:31:54.780 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:54.780 Nvme0n1 : 5.00 17954.80 70.14 0.00 0.00 0.00 0.00 0.00 00:31:54.780 [2024-11-05T03:43:08.420Z] =================================================================================================================== 00:31:54.780 [2024-11-05T03:43:08.420Z] Total : 17954.80 70.14 0.00 0.00 0.00 0.00 0.00 00:31:54.780 00:31:55.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:55.723 Nvme0n1 : 6.00 17970.33 70.20 0.00 0.00 0.00 0.00 0.00 00:31:55.723 [2024-11-05T03:43:09.363Z] 
=================================================================================================================== 00:31:55.723 [2024-11-05T03:43:09.363Z] Total : 17970.33 70.20 0.00 0.00 0.00 0.00 0.00 00:31:55.723 00:31:56.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:56.665 Nvme0n1 : 7.00 17990.43 70.28 0.00 0.00 0.00 0.00 0.00 00:31:56.665 [2024-11-05T03:43:10.305Z] =================================================================================================================== 00:31:56.665 [2024-11-05T03:43:10.305Z] Total : 17990.43 70.28 0.00 0.00 0.00 0.00 0.00 00:31:56.665 00:31:57.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:57.608 Nvme0n1 : 8.00 17997.62 70.30 0.00 0.00 0.00 0.00 0.00 00:31:57.608 [2024-11-05T03:43:11.248Z] =================================================================================================================== 00:31:57.608 [2024-11-05T03:43:11.248Z] Total : 17997.62 70.30 0.00 0.00 0.00 0.00 0.00 00:31:57.608 00:31:58.550 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:58.550 Nvme0n1 : 9.00 18003.22 70.33 0.00 0.00 0.00 0.00 0.00 00:31:58.550 [2024-11-05T03:43:12.190Z] =================================================================================================================== 00:31:58.550 [2024-11-05T03:43:12.190Z] Total : 18003.22 70.33 0.00 0.00 0.00 0.00 0.00 00:31:58.550 00:31:59.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:59.493 Nvme0n1 : 10.00 18014.20 70.37 0.00 0.00 0.00 0.00 0.00 00:31:59.493 [2024-11-05T03:43:13.133Z] =================================================================================================================== 00:31:59.493 [2024-11-05T03:43:13.133Z] Total : 18014.20 70.37 0.00 0.00 0.00 0.00 0.00 00:31:59.493 00:31:59.493 00:31:59.493 Latency(us) 00:31:59.493 [2024-11-05T03:43:13.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:59.493 Nvme0n1 : 10.00 18013.46 70.37 0.00 0.00 7100.95 2143.57 12834.13 00:31:59.493 [2024-11-05T03:43:13.133Z] =================================================================================================================== 00:31:59.493 [2024-11-05T03:43:13.133Z] Total : 18013.46 70.37 0.00 0.00 7100.95 2143.57 12834.13 00:31:59.493 { 00:31:59.493 "results": [ 00:31:59.493 { 00:31:59.493 "job": "Nvme0n1", 00:31:59.493 "core_mask": "0x2", 00:31:59.493 "workload": "randwrite", 00:31:59.493 "status": "finished", 00:31:59.493 "queue_depth": 128, 00:31:59.493 "io_size": 4096, 00:31:59.493 "runtime": 10.003909, 00:31:59.493 "iops": 18013.458539057083, 00:31:59.493 "mibps": 70.36507241819173, 00:31:59.493 "io_failed": 0, 00:31:59.493 "io_timeout": 0, 00:31:59.493 "avg_latency_us": 7100.946908243391, 00:31:59.493 "min_latency_us": 2143.5733333333333, 00:31:59.493 "max_latency_us": 12834.133333333333 00:31:59.493 } 00:31:59.493 ], 00:31:59.493 "core_count": 1 00:31:59.493 } 00:31:59.493 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3220203 00:31:59.493 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 3220203 ']' 00:31:59.493 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 3220203 
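[editor's note] The growth itself, exercised while the I/O above was running, is a short RPC sequence against the live target; condensed from the trace (UUID and path are from this run):

    truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
    rpc.py bdev_aio_rescan aio_bdev        # block count 51200 -> 102400, lvstore not yet grown
    rpc.py bdev_lvol_grow_lvstore -u 49dc5de9-0ab7-487a-a6ff-326b6c6a61eb
    rpc.py bdev_lvol_get_lvstores -u 49dc5de9-0ab7-487a-a6ff-326b6c6a61eb \
        | jq -r '.[0].total_data_clusters'   # 49 before the grow, 99 after (4 MiB clusters)

Note the rescan alone does not grow the lvstore: total_data_clusters stays at 49 until bdev_lvol_grow_lvstore runs, which is exactly what the nvmf_lvs_grow.sh@38 and @61 checks above assert.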
00:31:59.493 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:31:59.493 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:59.493 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3220203 00:31:59.755 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:59.755 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:59.755 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3220203' 00:31:59.755 killing process with pid 3220203 00:31:59.755 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 3220203 00:31:59.755 Received shutdown signal, test time was about 10.000000 seconds 00:31:59.755 00:31:59.755 Latency(us) 00:31:59.755 [2024-11-05T03:43:13.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.755 [2024-11-05T03:43:13.395Z] =================================================================================================================== 00:31:59.755 [2024-11-05T03:43:13.395Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:59.755 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 3220203 00:31:59.755 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:00.016 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:00.016 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dc5de9-0ab7-487a-a6ff-326b6c6a61eb 00:32:00.016 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:00.277 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:00.277 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:00.277 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:00.538 [2024-11-05 04:43:13.932212] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:00.538 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dc5de9-0ab7-487a-a6ff-326b6c6a61eb 
00:32:00.538 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:32:00.538 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dc5de9-0ab7-487a-a6ff-326b6c6a61eb 00:32:00.538 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.538 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:00.538 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.538 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:00.538 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.538 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:00.538 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:00.538 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:00.538 04:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dc5de9-0ab7-487a-a6ff-326b6c6a61eb 00:32:00.538 request: 00:32:00.538 { 00:32:00.538 "uuid": "49dc5de9-0ab7-487a-a6ff-326b6c6a61eb", 00:32:00.538 "method": "bdev_lvol_get_lvstores", 00:32:00.538 "req_id": 1 00:32:00.538 } 00:32:00.538 Got JSON-RPC error response 00:32:00.538 response: 00:32:00.538 { 00:32:00.538 "code": -19, 00:32:00.538 "message": "No such device" 00:32:00.538 } 00:32:00.538 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:32:00.538 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:00.538 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:00.538 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:00.538 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:00.799 aio_bdev 00:32:00.799 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
d406988d-bfe2-45b3-b366-fbd245735dbb 00:32:00.799 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=d406988d-bfe2-45b3-b366-fbd245735dbb 00:32:00.799 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:00.799 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:32:00.799 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:00.799 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:00.799 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:01.060 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d406988d-bfe2-45b3-b366-fbd245735dbb -t 2000 00:32:01.060 [ 00:32:01.060 { 00:32:01.060 "name": "d406988d-bfe2-45b3-b366-fbd245735dbb", 00:32:01.060 "aliases": [ 00:32:01.060 "lvs/lvol" 00:32:01.060 ], 00:32:01.060 "product_name": "Logical Volume", 00:32:01.060 "block_size": 4096, 00:32:01.061 "num_blocks": 38912, 00:32:01.061 "uuid": "d406988d-bfe2-45b3-b366-fbd245735dbb", 00:32:01.061 "assigned_rate_limits": { 00:32:01.061 "rw_ios_per_sec": 0, 00:32:01.061 "rw_mbytes_per_sec": 0, 00:32:01.061 "r_mbytes_per_sec": 0, 00:32:01.061 "w_mbytes_per_sec": 0 00:32:01.061 }, 00:32:01.061 "claimed": false, 00:32:01.061 "zoned": false, 00:32:01.061 "supported_io_types": { 00:32:01.061 "read": true, 00:32:01.061 "write": true, 00:32:01.061 "unmap": true, 00:32:01.061 "flush": false, 00:32:01.061 "reset": true, 00:32:01.061 "nvme_admin": false, 00:32:01.061 "nvme_io": false, 00:32:01.061 "nvme_io_md": false, 00:32:01.061 "write_zeroes": true, 00:32:01.061 "zcopy": false, 00:32:01.061 "get_zone_info": false, 00:32:01.061 "zone_management": false, 00:32:01.061 "zone_append": false, 00:32:01.061 "compare": false, 00:32:01.061 "compare_and_write": false, 00:32:01.061 "abort": false, 00:32:01.061 "seek_hole": true, 00:32:01.061 "seek_data": true, 00:32:01.061 "copy": false, 00:32:01.061 "nvme_iov_md": false 00:32:01.061 }, 00:32:01.061 "driver_specific": { 00:32:01.061 "lvol": { 00:32:01.061 "lvol_store_uuid": "49dc5de9-0ab7-487a-a6ff-326b6c6a61eb", 00:32:01.061 "base_bdev": "aio_bdev", 00:32:01.061 "thin_provision": false, 00:32:01.061 "num_allocated_clusters": 38, 00:32:01.061 "snapshot": false, 00:32:01.061 "clone": false, 00:32:01.061 "esnap_clone": false 00:32:01.061 } 00:32:01.061 } 00:32:01.061 } 00:32:01.061 ] 00:32:01.061 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:32:01.061 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dc5de9-0ab7-487a-a6ff-326b6c6a61eb 00:32:01.061 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:01.322 04:43:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:01.322 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:01.322 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 49dc5de9-0ab7-487a-a6ff-326b6c6a61eb 00:32:01.583 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:01.583 04:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d406988d-bfe2-45b3-b366-fbd245735dbb 00:32:01.583 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 49dc5de9-0ab7-487a-a6ff-326b6c6a61eb 00:32:01.845 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:02.106 00:32:02.106 real 0m15.658s 00:32:02.106 user 0m15.295s 00:32:02.106 sys 0m1.361s 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:02.106 ************************************ 00:32:02.106 END TEST lvs_grow_clean 00:32:02.106 ************************************ 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:02.106 ************************************ 00:32:02.106 START TEST lvs_grow_dirty 00:32:02.106 ************************************ 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:02.106 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:02.367 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:02.367 04:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:02.627 04:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8318d7c8-d582-4288-b368-28f95a56f98f 00:32:02.627 04:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8318d7c8-d582-4288-b368-28f95a56f98f 00:32:02.627 04:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:02.627 04:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:02.627 04:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:02.627 04:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8318d7c8-d582-4288-b368-28f95a56f98f lvol 150 00:32:02.887 04:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6f1ad712-402e-410a-8fb8-4bf7853e80bc 00:32:02.887 04:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:02.887 04:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:03.148 [2024-11-05 04:43:16.556168] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:03.148 [2024-11-05 04:43:16.556236] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:03.148 true 00:32:03.148 04:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:03.148 04:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8318d7c8-d582-4288-b368-28f95a56f98f 00:32:03.148 04:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:03.148 04:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:03.410 04:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6f1ad712-402e-410a-8fb8-4bf7853e80bc 00:32:03.672 04:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:03.672 [2024-11-05 04:43:17.208486] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:03.672 04:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:03.933 04:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3223140 00:32:03.933 04:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:03.933 04:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3223140 /var/tmp/bdevperf.sock 00:32:03.933 04:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3223140 ']' 00:32:03.933 04:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:03.933 04:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:03.933 04:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:03.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:03.933 04:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:03.933 04:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:03.933 04:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:03.933 [2024-11-05 04:43:17.424279] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:32:03.933 [2024-11-05 04:43:17.424337] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3223140 ] 00:32:03.933 [2024-11-05 04:43:17.507456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.933 [2024-11-05 04:43:17.537219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:04.874 04:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:04.874 04:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:04.874 04:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:05.134 Nvme0n1 00:32:05.135 04:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:05.135 [ 00:32:05.135 { 00:32:05.135 "name": "Nvme0n1", 00:32:05.135 "aliases": [ 00:32:05.135 "6f1ad712-402e-410a-8fb8-4bf7853e80bc" 00:32:05.135 ], 00:32:05.135 "product_name": "NVMe disk", 00:32:05.135 "block_size": 4096, 00:32:05.135 "num_blocks": 38912, 00:32:05.135 "uuid": "6f1ad712-402e-410a-8fb8-4bf7853e80bc", 00:32:05.135 "numa_id": 0, 00:32:05.135 "assigned_rate_limits": { 00:32:05.135 "rw_ios_per_sec": 0, 00:32:05.135 "rw_mbytes_per_sec": 0, 00:32:05.135 "r_mbytes_per_sec": 0, 00:32:05.135 "w_mbytes_per_sec": 0 00:32:05.135 }, 00:32:05.135 "claimed": false, 00:32:05.135 "zoned": false, 00:32:05.135 "supported_io_types": { 00:32:05.135 "read": true, 00:32:05.135 "write": true, 00:32:05.135 "unmap": true, 00:32:05.135 "flush": true, 00:32:05.135 "reset": true, 00:32:05.135 "nvme_admin": true, 00:32:05.135 "nvme_io": true, 00:32:05.135 "nvme_io_md": false, 00:32:05.135 "write_zeroes": true, 00:32:05.135 "zcopy": false, 00:32:05.135 "get_zone_info": false, 00:32:05.135 "zone_management": false, 00:32:05.135 "zone_append": false, 00:32:05.135 "compare": true, 00:32:05.135 "compare_and_write": true, 00:32:05.135 "abort": true, 00:32:05.135 "seek_hole": false, 00:32:05.135 "seek_data": false, 00:32:05.135 "copy": true, 00:32:05.135 "nvme_iov_md": false 00:32:05.135 }, 00:32:05.135 "memory_domains": [ 00:32:05.135 { 00:32:05.135 "dma_device_id": "system", 00:32:05.135 "dma_device_type": 1 00:32:05.135 } 00:32:05.135 ], 00:32:05.135 "driver_specific": { 
00:32:05.135 "nvme": [ 00:32:05.135 { 00:32:05.135 "trid": { 00:32:05.135 "trtype": "TCP", 00:32:05.135 "adrfam": "IPv4", 00:32:05.135 "traddr": "10.0.0.2", 00:32:05.135 "trsvcid": "4420", 00:32:05.135 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:05.135 }, 00:32:05.135 "ctrlr_data": { 00:32:05.135 "cntlid": 1, 00:32:05.135 "vendor_id": "0x8086", 00:32:05.135 "model_number": "SPDK bdev Controller", 00:32:05.135 "serial_number": "SPDK0", 00:32:05.135 "firmware_revision": "25.01", 00:32:05.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:05.135 "oacs": { 00:32:05.135 "security": 0, 00:32:05.135 "format": 0, 00:32:05.135 "firmware": 0, 00:32:05.135 "ns_manage": 0 00:32:05.135 }, 00:32:05.135 "multi_ctrlr": true, 00:32:05.135 "ana_reporting": false 00:32:05.135 }, 00:32:05.135 "vs": { 00:32:05.135 "nvme_version": "1.3" 00:32:05.135 }, 00:32:05.135 "ns_data": { 00:32:05.135 "id": 1, 00:32:05.135 "can_share": true 00:32:05.135 } 00:32:05.135 } 00:32:05.135 ], 00:32:05.135 "mp_policy": "active_passive" 00:32:05.135 } 00:32:05.135 } 00:32:05.135 ] 00:32:05.396 04:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3223324 00:32:05.396 04:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:05.397 04:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:05.397 Running I/O for 10 seconds... 00:32:06.339 Latency(us) 00:32:06.339 [2024-11-05T03:43:19.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:06.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:06.339 Nvme0n1 : 1.00 17712.00 69.19 0.00 0.00 0.00 0.00 0.00 00:32:06.339 [2024-11-05T03:43:19.979Z] =================================================================================================================== 00:32:06.339 [2024-11-05T03:43:19.979Z] Total : 17712.00 69.19 0.00 0.00 0.00 0.00 0.00 00:32:06.339 00:32:07.282 04:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8318d7c8-d582-4288-b368-28f95a56f98f 00:32:07.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:07.282 Nvme0n1 : 2.00 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:32:07.282 [2024-11-05T03:43:20.922Z] =================================================================================================================== 00:32:07.282 [2024-11-05T03:43:20.922Z] Total : 17780.00 69.45 0.00 0.00 0.00 0.00 0.00 00:32:07.282 00:32:07.543 true 00:32:07.543 04:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8318d7c8-d582-4288-b368-28f95a56f98f 00:32:07.543 04:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:07.543 04:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:07.543 04:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 
-- # (( data_clusters == 99 )) 00:32:07.543 04:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3223324 00:32:08.486 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:08.486 Nvme0n1 : 3.00 17784.33 69.47 0.00 0.00 0.00 0.00 0.00 00:32:08.486 [2024-11-05T03:43:22.126Z] =================================================================================================================== 00:32:08.486 [2024-11-05T03:43:22.126Z] Total : 17784.33 69.47 0.00 0.00 0.00 0.00 0.00 00:32:08.486 00:32:09.428 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:09.428 Nvme0n1 : 4.00 17806.25 69.56 0.00 0.00 0.00 0.00 0.00 00:32:09.428 [2024-11-05T03:43:23.068Z] =================================================================================================================== 00:32:09.428 [2024-11-05T03:43:23.068Z] Total : 17806.25 69.56 0.00 0.00 0.00 0.00 0.00 00:32:09.428 00:32:10.368 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:10.368 Nvme0n1 : 5.00 17825.60 69.63 0.00 0.00 0.00 0.00 0.00 00:32:10.368 [2024-11-05T03:43:24.008Z] =================================================================================================================== 00:32:10.368 [2024-11-05T03:43:24.008Z] Total : 17825.60 69.63 0.00 0.00 0.00 0.00 0.00 00:32:10.368 00:32:11.309 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:11.309 Nvme0n1 : 6.00 17841.33 69.69 0.00 0.00 0.00 0.00 0.00 00:32:11.309 [2024-11-05T03:43:24.949Z] =================================================================================================================== 00:32:11.309 [2024-11-05T03:43:24.949Z] Total : 17841.33 69.69 0.00 0.00 0.00 0.00 0.00 00:32:11.309 00:32:12.250 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:12.250 Nvme0n1 : 7.00 17852.57 69.74 0.00 0.00 0.00 0.00 0.00 00:32:12.250 [2024-11-05T03:43:25.890Z] =================================================================================================================== 00:32:12.250 [2024-11-05T03:43:25.890Z] Total : 17852.57 69.74 0.00 0.00 0.00 0.00 0.00 00:32:12.250 00:32:13.631 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:13.631 Nvme0n1 : 8.00 17861.00 69.77 0.00 0.00 0.00 0.00 0.00 00:32:13.631 [2024-11-05T03:43:27.271Z] =================================================================================================================== 00:32:13.631 [2024-11-05T03:43:27.271Z] Total : 17861.00 69.77 0.00 0.00 0.00 0.00 0.00 00:32:13.631 00:32:14.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:14.573 Nvme0n1 : 9.00 17867.56 69.80 0.00 0.00 0.00 0.00 0.00 00:32:14.573 [2024-11-05T03:43:28.213Z] =================================================================================================================== 00:32:14.573 [2024-11-05T03:43:28.213Z] Total : 17867.56 69.80 0.00 0.00 0.00 0.00 0.00 00:32:14.573 00:32:15.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:15.513 Nvme0n1 : 10.00 17879.20 69.84 0.00 0.00 0.00 0.00 0.00 00:32:15.513 [2024-11-05T03:43:29.153Z] =================================================================================================================== 00:32:15.513 [2024-11-05T03:43:29.153Z] Total : 17879.20 69.84 0.00 0.00 0.00 0.00 0.00 00:32:15.513 00:32:15.513 00:32:15.513 Latency(us) 00:32:15.513 
[2024-11-05T03:43:29.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.513 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:15.513 Nvme0n1 : 10.00 17878.02 69.84 0.00 0.00 7155.01 1624.75 12997.97 00:32:15.513 [2024-11-05T03:43:29.153Z] =================================================================================================================== 00:32:15.513 [2024-11-05T03:43:29.153Z] Total : 17878.02 69.84 0.00 0.00 7155.01 1624.75 12997.97 00:32:15.513 { 00:32:15.513 "results": [ 00:32:15.513 { 00:32:15.513 "job": "Nvme0n1", 00:32:15.513 "core_mask": "0x2", 00:32:15.513 "workload": "randwrite", 00:32:15.513 "status": "finished", 00:32:15.513 "queue_depth": 128, 00:32:15.513 "io_size": 4096, 00:32:15.513 "runtime": 10.004242, 00:32:15.513 "iops": 17878.016145551057, 00:32:15.513 "mibps": 69.83600056855882, 00:32:15.513 "io_failed": 0, 00:32:15.513 "io_timeout": 0, 00:32:15.513 "avg_latency_us": 7155.005703210031, 00:32:15.513 "min_latency_us": 1624.7466666666667, 00:32:15.513 "max_latency_us": 12997.973333333333 00:32:15.513 } 00:32:15.513 ], 00:32:15.513 "core_count": 1 00:32:15.513 } 00:32:15.513 04:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3223140 00:32:15.513 04:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 3223140 ']' 00:32:15.513 04:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 3223140 00:32:15.513 04:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:32:15.513 04:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:15.513 04:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3223140 00:32:15.513 04:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:15.513 04:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:15.513 04:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3223140' 00:32:15.513 killing process with pid 3223140 00:32:15.513 04:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 3223140 00:32:15.513 Received shutdown signal, test time was about 10.000000 seconds 00:32:15.513 00:32:15.513 Latency(us) 00:32:15.513 [2024-11-05T03:43:29.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:15.513 [2024-11-05T03:43:29.153Z] =================================================================================================================== 00:32:15.513 [2024-11-05T03:43:29.153Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:15.513 04:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 3223140 00:32:15.513 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:15.773 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8318d7c8-d582-4288-b368-28f95a56f98f 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3219515 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3219515 00:32:16.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3219515 Killed "${NVMF_APP[@]}" "$@" 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=3225441 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 3225441 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 3225441 ']' 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:16.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:16.034 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:16.294 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:16.294 04:43:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:16.294 [2024-11-05 04:43:29.721518] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:16.294 [2024-11-05 04:43:29.722520] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:32:16.294 [2024-11-05 04:43:29.722563] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:16.294 [2024-11-05 04:43:29.799634] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:16.294 [2024-11-05 04:43:29.834772] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:16.294 [2024-11-05 04:43:29.834804] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:16.294 [2024-11-05 04:43:29.834811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:16.294 [2024-11-05 04:43:29.834818] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:16.294 [2024-11-05 04:43:29.834824] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:16.294 [2024-11-05 04:43:29.835357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:16.294 [2024-11-05 04:43:29.889579] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:16.294 [2024-11-05 04:43:29.889837] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:17.234 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:17.234 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:17.234 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:17.234 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:17.234 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:17.234 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:17.234 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:17.234 [2024-11-05 04:43:30.709992] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:17.234 [2024-11-05 04:43:30.710090] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:17.234 [2024-11-05 04:43:30.710122] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:17.234 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:17.234 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6f1ad712-402e-410a-8fb8-4bf7853e80bc 00:32:17.234 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=6f1ad712-402e-410a-8fb8-4bf7853e80bc 00:32:17.234 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:17.234 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:17.234 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:17.234 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:17.234 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:17.494 04:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6f1ad712-402e-410a-8fb8-4bf7853e80bc -t 2000 00:32:17.494 [ 00:32:17.494 { 00:32:17.494 "name": "6f1ad712-402e-410a-8fb8-4bf7853e80bc", 00:32:17.494 "aliases": [ 00:32:17.494 "lvs/lvol" 00:32:17.494 ], 00:32:17.494 "product_name": "Logical Volume", 00:32:17.494 "block_size": 4096, 00:32:17.494 "num_blocks": 38912, 00:32:17.494 "uuid": "6f1ad712-402e-410a-8fb8-4bf7853e80bc", 00:32:17.494 "assigned_rate_limits": { 00:32:17.494 "rw_ios_per_sec": 0, 00:32:17.494 "rw_mbytes_per_sec": 0, 00:32:17.494 
"r_mbytes_per_sec": 0, 00:32:17.494 "w_mbytes_per_sec": 0 00:32:17.494 }, 00:32:17.494 "claimed": false, 00:32:17.494 "zoned": false, 00:32:17.494 "supported_io_types": { 00:32:17.494 "read": true, 00:32:17.494 "write": true, 00:32:17.494 "unmap": true, 00:32:17.494 "flush": false, 00:32:17.494 "reset": true, 00:32:17.494 "nvme_admin": false, 00:32:17.494 "nvme_io": false, 00:32:17.494 "nvme_io_md": false, 00:32:17.494 "write_zeroes": true, 00:32:17.494 "zcopy": false, 00:32:17.494 "get_zone_info": false, 00:32:17.494 "zone_management": false, 00:32:17.494 "zone_append": false, 00:32:17.494 "compare": false, 00:32:17.494 "compare_and_write": false, 00:32:17.494 "abort": false, 00:32:17.494 "seek_hole": true, 00:32:17.494 "seek_data": true, 00:32:17.494 "copy": false, 00:32:17.494 "nvme_iov_md": false 00:32:17.494 }, 00:32:17.494 "driver_specific": { 00:32:17.494 "lvol": { 00:32:17.494 "lvol_store_uuid": "8318d7c8-d582-4288-b368-28f95a56f98f", 00:32:17.494 "base_bdev": "aio_bdev", 00:32:17.494 "thin_provision": false, 00:32:17.494 "num_allocated_clusters": 38, 00:32:17.494 "snapshot": false, 00:32:17.494 "clone": false, 00:32:17.494 "esnap_clone": false 00:32:17.494 } 00:32:17.494 } 00:32:17.494 } 00:32:17.494 ] 00:32:17.494 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:17.494 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8318d7c8-d582-4288-b368-28f95a56f98f 00:32:17.494 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:17.755 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:17.755 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8318d7c8-d582-4288-b368-28f95a56f98f 00:32:17.755 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:18.015 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:18.015 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:18.015 [2024-11-05 04:43:31.571964] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:18.015 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8318d7c8-d582-4288-b368-28f95a56f98f 00:32:18.015 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:18.016 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8318d7c8-d582-4288-b368-28f95a56f98f 00:32:18.016 04:43:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:18.016 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:18.016 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:18.016 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:18.016 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:18.016 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:18.016 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:18.016 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:18.016 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8318d7c8-d582-4288-b368-28f95a56f98f 00:32:18.337 request: 00:32:18.337 { 00:32:18.337 "uuid": "8318d7c8-d582-4288-b368-28f95a56f98f", 00:32:18.337 "method": "bdev_lvol_get_lvstores", 00:32:18.337 "req_id": 1 00:32:18.337 } 00:32:18.337 Got JSON-RPC error response 00:32:18.337 response: 00:32:18.337 { 00:32:18.337 "code": -19, 00:32:18.337 "message": "No such device" 00:32:18.337 } 00:32:18.337 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:18.337 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:18.337 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:18.337 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:18.337 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:18.668 aio_bdev 00:32:18.668 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6f1ad712-402e-410a-8fb8-4bf7853e80bc 00:32:18.668 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=6f1ad712-402e-410a-8fb8-4bf7853e80bc 00:32:18.668 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:18.668 04:43:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:18.668 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:18.668 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:18.668 04:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:18.668 04:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6f1ad712-402e-410a-8fb8-4bf7853e80bc -t 2000 00:32:19.007 [ 00:32:19.007 { 00:32:19.007 "name": "6f1ad712-402e-410a-8fb8-4bf7853e80bc", 00:32:19.007 "aliases": [ 00:32:19.007 "lvs/lvol" 00:32:19.007 ], 00:32:19.007 "product_name": "Logical Volume", 00:32:19.007 "block_size": 4096, 00:32:19.007 "num_blocks": 38912, 00:32:19.007 "uuid": "6f1ad712-402e-410a-8fb8-4bf7853e80bc", 00:32:19.007 "assigned_rate_limits": { 00:32:19.007 "rw_ios_per_sec": 0, 00:32:19.007 "rw_mbytes_per_sec": 0, 00:32:19.007 "r_mbytes_per_sec": 0, 00:32:19.007 "w_mbytes_per_sec": 0 00:32:19.007 }, 00:32:19.007 "claimed": false, 00:32:19.007 "zoned": false, 00:32:19.007 "supported_io_types": { 00:32:19.007 "read": true, 00:32:19.007 "write": true, 00:32:19.007 "unmap": true, 00:32:19.007 "flush": false, 00:32:19.007 "reset": true, 00:32:19.007 "nvme_admin": false, 00:32:19.007 "nvme_io": false, 00:32:19.007 "nvme_io_md": false, 00:32:19.007 "write_zeroes": true, 00:32:19.007 "zcopy": false, 00:32:19.007 "get_zone_info": false, 00:32:19.007 "zone_management": false, 00:32:19.007 "zone_append": false, 00:32:19.007 "compare": false, 00:32:19.007 "compare_and_write": false, 00:32:19.007 "abort": false, 00:32:19.007 "seek_hole": true, 00:32:19.007 "seek_data": true, 00:32:19.007 "copy": false, 00:32:19.007 "nvme_iov_md": false 00:32:19.007 }, 00:32:19.007 "driver_specific": { 00:32:19.007 "lvol": { 00:32:19.007 "lvol_store_uuid": "8318d7c8-d582-4288-b368-28f95a56f98f", 00:32:19.007 "base_bdev": "aio_bdev", 00:32:19.007 "thin_provision": false, 00:32:19.007 "num_allocated_clusters": 38, 00:32:19.007 "snapshot": false, 00:32:19.007 "clone": false, 00:32:19.007 "esnap_clone": false 00:32:19.007 } 00:32:19.007 } 00:32:19.007 } 00:32:19.007 ] 00:32:19.007 04:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:19.007 04:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8318d7c8-d582-4288-b368-28f95a56f98f 00:32:19.007 04:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:19.007 04:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:19.007 04:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8318d7c8-d582-4288-b368-28f95a56f98f 00:32:19.007 04:43:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:19.325 04:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:19.325 04:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6f1ad712-402e-410a-8fb8-4bf7853e80bc 00:32:19.325 04:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8318d7c8-d582-4288-b368-28f95a56f98f 00:32:19.610 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:19.610 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:19.872 00:32:19.872 real 0m17.611s 00:32:19.872 user 0m35.365s 00:32:19.872 sys 0m3.021s 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:19.872 ************************************ 00:32:19.872 END TEST lvs_grow_dirty 00:32:19.872 ************************************ 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:19.872 nvmf_trace.0 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:19.872 rmmod nvme_tcp 00:32:19.872 rmmod nvme_fabrics 00:32:19.872 rmmod nvme_keyring 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 3225441 ']' 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 3225441 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 3225441 ']' 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 3225441 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:19.872 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3225441 00:32:20.133 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:20.133 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:20.133 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3225441' 00:32:20.133 killing process with pid 3225441 00:32:20.133 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 3225441 00:32:20.133 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 3225441 00:32:20.133 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:20.133 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:20.133 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:20.133 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:20.133 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:20.133 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:20.133 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:20.133 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:20.133 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:20.134 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.134 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.134 04:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.682 04:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:22.682 00:32:22.682 real 0m44.463s 00:32:22.682 user 0m53.611s 00:32:22.682 sys 0m10.299s 00:32:22.682 04:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:22.682 04:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:22.682 ************************************ 00:32:22.682 END TEST nvmf_lvs_grow 00:32:22.682 ************************************ 00:32:22.682 04:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:22.682 04:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:22.682 04:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:22.682 04:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:22.682 ************************************ 00:32:22.682 START TEST nvmf_bdev_io_wait 00:32:22.682 ************************************ 00:32:22.682 04:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:22.682 * Looking for test storage... 
00:32:22.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:22.682 04:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:22.682 04:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:22.682 04:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:22.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.682 --rc genhtml_branch_coverage=1 00:32:22.682 --rc genhtml_function_coverage=1 00:32:22.682 --rc genhtml_legend=1 00:32:22.682 --rc geninfo_all_blocks=1 00:32:22.682 --rc geninfo_unexecuted_blocks=1 00:32:22.682 00:32:22.682 ' 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:22.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.682 --rc genhtml_branch_coverage=1 00:32:22.682 --rc genhtml_function_coverage=1 00:32:22.682 --rc genhtml_legend=1 00:32:22.682 --rc geninfo_all_blocks=1 00:32:22.682 --rc geninfo_unexecuted_blocks=1 00:32:22.682 00:32:22.682 ' 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:22.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.682 --rc genhtml_branch_coverage=1 00:32:22.682 --rc genhtml_function_coverage=1 00:32:22.682 --rc genhtml_legend=1 00:32:22.682 --rc geninfo_all_blocks=1 00:32:22.682 --rc geninfo_unexecuted_blocks=1 00:32:22.682 00:32:22.682 ' 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:22.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.682 --rc genhtml_branch_coverage=1 00:32:22.682 --rc genhtml_function_coverage=1 00:32:22.682 --rc genhtml_legend=1 00:32:22.682 --rc geninfo_all_blocks=1 00:32:22.682 --rc 
geninfo_unexecuted_blocks=1 00:32:22.682 00:32:22.682 ' 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.682 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:22.683 04:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
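gather_supported_nvmf_pci_devs classifies NICs by looking up vendor:device IDs in a pci_bus_cache map and appending the matches to per-family arrays (e810, x722, mlx), which later loops iterate to find usable ports. A minimal sketch of the same id-bucketing pattern; the cache population from lspci here is my assumption, the real helper builds pci_bus_cache elsewhere in common.sh:

    declare -A pci_bus_cache              # "0xVVVV:0xDDDD" -> space-separated BDF list
    while read -r bdf _ vendor device _; do
        pci_bus_cache["0x$vendor:0x$device"]+=" $bdf"
    done < <(lspci -Dnmm | tr -d '"')
    intel=0x8086
    e810=(${pci_bus_cache["$intel:0x159b"]:-})    # 0x159b is the E810-C seen in this run
    echo "found ${#e810[@]} E810 port(s): ${e810[*]}"
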
00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:30.831 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:30.831 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:30.831 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:30.831 
04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:30.831 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:30.831 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:30.832 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:30.832 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:32:30.832 00:32:30.832 --- 10.0.0.2 ping statistics --- 00:32:30.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.832 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:30.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:30.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:32:30.832 00:32:30.832 --- 10.0.0.1 ping statistics --- 00:32:30.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.832 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=3230394 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 3230394 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 3230394 ']' 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
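The trace above builds the test topology: one port of the back-to-back E810 pair (cvl_0_0) is moved into a fresh namespace cvl_0_0_ns_spdk and addressed 10.0.0.2/24, the other port (cvl_0_1) stays in the root namespace as 10.0.0.1/24, a tagged iptables rule opens port 4420, and one ping in each direction verifies the path. On a machine without paired NICs the same two-namespace layout can be approximated with a veth pair; a minimal sketch (interface and namespace names reused from the trace, the veth substitution is mine):

    ip netns add cvl_0_0_ns_spdk
    # substitute a veth pair for the back-to-back physical ports
    ip link add cvl_0_1 type veth peer name cvl_0_0
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # ACCEPT rule tagged so nvmftestfini can sweep it with
    # iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF: test rule'
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
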
00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:30.832 04:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:30.832 [2024-11-05 04:43:43.577196] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:30.832 [2024-11-05 04:43:43.578228] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:32:30.832 [2024-11-05 04:43:43.578271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:30.832 [2024-11-05 04:43:43.660303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:30.832 [2024-11-05 04:43:43.699593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.832 [2024-11-05 04:43:43.699628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:30.832 [2024-11-05 04:43:43.699636] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.832 [2024-11-05 04:43:43.699643] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:30.832 [2024-11-05 04:43:43.699649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:30.832 [2024-11-05 04:43:43.701378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.832 [2024-11-05 04:43:43.701495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:30.832 [2024-11-05 04:43:43.701650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.832 [2024-11-05 04:43:43.701651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:30.832 [2024-11-05 04:43:43.701919] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
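nvmfappstart then launches nvmf_tgt inside the target namespace and blocks until the RPC socket appears, which is the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above. A minimal sketch of launch-then-poll, assuming the default socket path; the polling loop is my illustration, the real waitforlisten helper in autotest_common.sh is more careful (it also verifies the pid stays alive):

    # launch the target inside the namespace, remember its pid
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc &
    nvmfpid=$!
    # poll for the UNIX-domain RPC socket before issuing any rpc_cmd
    for _ in $(seq 1 100); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.1
    done
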
00:32:30.832 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:30.832 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:32:30.832 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:30.832 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:30.832 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:30.832 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:30.832 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:30.832 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.832 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:30.832 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.832 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:30.832 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.832 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:31.094 [2024-11-05 04:43:44.501156] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:31.094 [2024-11-05 04:43:44.501460] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:31.094 [2024-11-05 04:43:44.502181] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:31.094 [2024-11-05 04:43:44.502225] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
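Because the target was launched with --wait-for-rpc, subsystem initialization is deferred: the test first shrinks the bdev I/O pool to almost nothing (a pool of 5 with a cache of 1 makes buffer exhaustion likely, which is exactly the queued-I/O path this bdev_io_wait test exercises) and only then calls framework_start_init. The same two calls spelled out with the stock rpc.py client, to which rpc_cmd in the trace forwards its arguments:

    # pre-init options must land before the framework finishes initializing
    scripts/rpc.py bdev_set_options -p 5 -c 1
    scripts/rpc.py framework_start_init
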
00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:31.094 [2024-11-05 04:43:44.514126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:31.094 Malloc0 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:31.094 [2024-11-05 04:43:44.578376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3230673 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3230676 00:32:31.094 04:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:31.094 { 00:32:31.094 "params": { 00:32:31.094 "name": "Nvme$subsystem", 00:32:31.094 "trtype": "$TEST_TRANSPORT", 00:32:31.094 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:31.094 "adrfam": "ipv4", 00:32:31.094 "trsvcid": "$NVMF_PORT", 00:32:31.094 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:31.094 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:31.094 "hdgst": ${hdgst:-false}, 00:32:31.094 "ddgst": ${ddgst:-false} 00:32:31.094 }, 00:32:31.094 "method": "bdev_nvme_attach_controller" 00:32:31.094 } 00:32:31.094 EOF 00:32:31.094 )") 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3230678 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:31.094 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:31.095 { 00:32:31.095 "params": { 00:32:31.095 "name": "Nvme$subsystem", 00:32:31.095 "trtype": "$TEST_TRANSPORT", 00:32:31.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:31.095 "adrfam": "ipv4", 00:32:31.095 "trsvcid": "$NVMF_PORT", 00:32:31.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:31.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:31.095 "hdgst": ${hdgst:-false}, 00:32:31.095 "ddgst": ${ddgst:-false} 00:32:31.095 }, 00:32:31.095 "method": "bdev_nvme_attach_controller" 00:32:31.095 } 00:32:31.095 EOF 00:32:31.095 )") 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3230682 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 
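Lines @20 through @25 of bdev_io_wait.sh assemble the target: a TCP transport with an 8 KiB I/O unit size, a 64 MiB malloc bdev with 512 B blocks, and a subsystem exposing it on 10.0.0.2:4420. The same sequence spelled out with rpc.py, using the identical arguments shown in the trace:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MiB, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                             # -a: allow any host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420
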
00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:31.095 { 00:32:31.095 "params": { 00:32:31.095 "name": "Nvme$subsystem", 00:32:31.095 "trtype": "$TEST_TRANSPORT", 00:32:31.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:31.095 "adrfam": "ipv4", 00:32:31.095 "trsvcid": "$NVMF_PORT", 00:32:31.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:31.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:31.095 "hdgst": ${hdgst:-false}, 00:32:31.095 "ddgst": ${ddgst:-false} 00:32:31.095 }, 00:32:31.095 "method": "bdev_nvme_attach_controller" 00:32:31.095 } 00:32:31.095 EOF 00:32:31.095 )") 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:31.095 { 00:32:31.095 "params": { 00:32:31.095 "name": "Nvme$subsystem", 00:32:31.095 "trtype": "$TEST_TRANSPORT", 00:32:31.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:31.095 "adrfam": "ipv4", 00:32:31.095 "trsvcid": "$NVMF_PORT", 00:32:31.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:31.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:31.095 "hdgst": ${hdgst:-false}, 00:32:31.095 "ddgst": ${ddgst:-false} 00:32:31.095 }, 00:32:31.095 "method": "bdev_nvme_attach_controller" 00:32:31.095 } 00:32:31.095 EOF 00:32:31.095 )") 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3230673 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
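Each of the four bdevperf instances (write/read/flush/unmap, one dedicated core apiece via -m 0x10 through -m 0x80) receives its controller config as --json /dev/fd/63, i.e. bash process substitution over the gen_nvmf_target_json heredoc, so no config file ever touches disk. A minimal sketch of the pattern for the write instance; gen_nvmf_target_json is the helper traced above, the reduced invocation is mine:

    # config arrives on an anonymous fd (/dev/fd/63) via process substitution
    ./build/examples/bdevperf -m 0x10 -i 1 \
        --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
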
00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:31.095 "params": { 00:32:31.095 "name": "Nvme1", 00:32:31.095 "trtype": "tcp", 00:32:31.095 "traddr": "10.0.0.2", 00:32:31.095 "adrfam": "ipv4", 00:32:31.095 "trsvcid": "4420", 00:32:31.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:31.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:31.095 "hdgst": false, 00:32:31.095 "ddgst": false 00:32:31.095 }, 00:32:31.095 "method": "bdev_nvme_attach_controller" 00:32:31.095 }' 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:31.095 "params": { 00:32:31.095 "name": "Nvme1", 00:32:31.095 "trtype": "tcp", 00:32:31.095 "traddr": "10.0.0.2", 00:32:31.095 "adrfam": "ipv4", 00:32:31.095 "trsvcid": "4420", 00:32:31.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:31.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:31.095 "hdgst": false, 00:32:31.095 "ddgst": false 00:32:31.095 }, 00:32:31.095 "method": "bdev_nvme_attach_controller" 00:32:31.095 }' 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:31.095 "params": { 00:32:31.095 "name": "Nvme1", 00:32:31.095 "trtype": "tcp", 00:32:31.095 "traddr": "10.0.0.2", 00:32:31.095 "adrfam": "ipv4", 00:32:31.095 "trsvcid": "4420", 00:32:31.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:31.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:31.095 "hdgst": false, 00:32:31.095 "ddgst": false 00:32:31.095 }, 00:32:31.095 "method": "bdev_nvme_attach_controller" 00:32:31.095 }' 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:31.095 04:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:31.095 "params": { 00:32:31.095 "name": "Nvme1", 00:32:31.095 "trtype": "tcp", 00:32:31.095 "traddr": "10.0.0.2", 00:32:31.095 "adrfam": "ipv4", 00:32:31.095 "trsvcid": "4420", 00:32:31.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:31.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:31.095 "hdgst": false, 00:32:31.095 "ddgst": false 00:32:31.095 }, 00:32:31.095 "method": "bdev_nvme_attach_controller" 00:32:31.095 }' 00:32:31.095 [2024-11-05 04:43:44.632760] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:32:31.095 [2024-11-05 04:43:44.632816] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:32:31.095 [2024-11-05 04:43:44.634095] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:32:31.095 [2024-11-05 04:43:44.634147] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:32:31.095 [2024-11-05 04:43:44.638829] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:32:31.095 [2024-11-05 04:43:44.638877] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:32:31.095 [2024-11-05 04:43:44.650359] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:32:31.095 [2024-11-05 04:43:44.650426] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:32:31.356 [2024-11-05 04:43:44.780286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.356 [2024-11-05 04:43:44.809213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:31.356 [2024-11-05 04:43:44.838317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.356 [2024-11-05 04:43:44.866323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:31.356 [2024-11-05 04:43:44.896552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.356 [2024-11-05 04:43:44.926356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:31.356 [2024-11-05 04:43:44.943879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:31.356 [2024-11-05 04:43:44.971596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:31.616 Running I/O for 1 seconds... 00:32:31.616 Running I/O for 1 seconds... 00:32:31.616 Running I/O for 1 seconds... 00:32:31.616 Running I/O for 1 seconds... 
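The per-workload tables that follow report both IOPS and MiB/s; with the 4 KiB I/O size used here the two are related by MiB/s = IOPS * 4096 / 2^20, so for example 12486 IOPS corresponds to about 48.77 MiB/s, exactly the write row below (and 188072 IOPS of flush to 734.66 MiB/s). A one-liner to reproduce the conversion:

    awk 'BEGIN { printf "%.2f MiB/s\n", 12486 * 4096 / 1048576 }'
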
00:32:32.555 12486.00 IOPS, 48.77 MiB/s [2024-11-05T03:43:46.195Z] 12344.00 IOPS, 48.22 MiB/s 00:32:32.555 Latency(us) 00:32:32.555 [2024-11-05T03:43:46.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.555 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:32.555 Nvme1n1 : 1.01 12548.01 49.02 0.00 0.00 10166.03 1993.39 13434.88 00:32:32.555 [2024-11-05T03:43:46.195Z] =================================================================================================================== 00:32:32.555 [2024-11-05T03:43:46.195Z] Total : 12548.01 49.02 0.00 0.00 10166.03 1993.39 13434.88 00:32:32.555 00:32:32.555 Latency(us) 00:32:32.555 [2024-11-05T03:43:46.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.555 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:32.555 Nvme1n1 : 1.01 12406.19 48.46 0.00 0.00 10285.04 4915.20 16274.77 00:32:32.555 [2024-11-05T03:43:46.195Z] =================================================================================================================== 00:32:32.555 [2024-11-05T03:43:46.195Z] Total : 12406.19 48.46 0.00 0.00 10285.04 4915.20 16274.77 00:32:32.555 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3230676 00:32:32.555 16511.00 IOPS, 64.50 MiB/s 00:32:32.555 Latency(us) 00:32:32.555 [2024-11-05T03:43:46.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.556 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:32.556 Nvme1n1 : 1.01 16563.85 64.70 0.00 0.00 7710.15 2812.59 12615.68 00:32:32.556 [2024-11-05T03:43:46.196Z] =================================================================================================================== 00:32:32.556 [2024-11-05T03:43:46.196Z] Total : 16563.85 64.70 0.00 0.00 7710.15 2812.59 12615.68 00:32:32.817 188072.00 IOPS, 734.66 MiB/s 00:32:32.817 Latency(us) 00:32:32.817 [2024-11-05T03:43:46.457Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:32.817 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:32.817 Nvme1n1 : 1.00 187697.66 733.19 0.00 0.00 677.60 300.37 1966.08 00:32:32.817 [2024-11-05T03:43:46.457Z] =================================================================================================================== 00:32:32.817 [2024-11-05T03:43:46.457Z] Total : 187697.66 733.19 0.00 0.00 677.60 300.37 1966.08 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3230678 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3230682 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:32.817 04:43:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:32.817 rmmod nvme_tcp 00:32:32.817 rmmod nvme_fabrics 00:32:32.817 rmmod nvme_keyring 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:32.817 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:32.818 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 3230394 ']' 00:32:32.818 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 3230394 00:32:32.818 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 3230394 ']' 00:32:32.818 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 3230394 00:32:32.818 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:32:32.818 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:32.818 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3230394 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3230394' 00:32:33.078 killing process with pid 3230394 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 3230394 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 3230394 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:33.078 04:43:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:33.078 04:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:35.630 00:32:35.630 real 0m12.840s 00:32:35.630 user 0m14.858s 00:32:35.630 sys 0m7.505s 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:35.630 ************************************ 00:32:35.630 END TEST nvmf_bdev_io_wait 00:32:35.630 ************************************ 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:35.630 ************************************ 00:32:35.630 START TEST nvmf_queue_depth 00:32:35.630 ************************************ 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:35.630 * Looking for test storage... 
00:32:35.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:35.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.630 --rc genhtml_branch_coverage=1 00:32:35.630 --rc genhtml_function_coverage=1 00:32:35.630 --rc genhtml_legend=1 00:32:35.630 --rc geninfo_all_blocks=1 00:32:35.630 --rc geninfo_unexecuted_blocks=1 00:32:35.630 00:32:35.630 ' 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:35.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.630 --rc genhtml_branch_coverage=1 00:32:35.630 --rc genhtml_function_coverage=1 00:32:35.630 --rc genhtml_legend=1 00:32:35.630 --rc geninfo_all_blocks=1 00:32:35.630 --rc geninfo_unexecuted_blocks=1 00:32:35.630 00:32:35.630 ' 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:35.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.630 --rc genhtml_branch_coverage=1 00:32:35.630 --rc genhtml_function_coverage=1 00:32:35.630 --rc genhtml_legend=1 00:32:35.630 --rc geninfo_all_blocks=1 00:32:35.630 --rc geninfo_unexecuted_blocks=1 00:32:35.630 00:32:35.630 ' 00:32:35.630 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:35.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.630 --rc genhtml_branch_coverage=1 00:32:35.630 --rc genhtml_function_coverage=1 00:32:35.630 --rc genhtml_legend=1 00:32:35.631 --rc geninfo_all_blocks=1 00:32:35.631 --rc 
geninfo_unexecuted_blocks=1 00:32:35.631 00:32:35.631 ' 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:35.631 04:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
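The array plumbing traced here builds an allowlist of NVMf-capable NIC PCI IDs: e810 and x722 collect Intel device IDs, mlx collects Mellanox ones, and because this job runs with SPDK_TEST_NVMF_NICS=e810 only the e810 entries survive into pci_devs. A condensed sketch of the pattern (pci_bus_cache is assumed to have been populated by an earlier lspci scan in nvmf/common.sh):

    intel=0x8086 mellanox=0x15b3
    e810=() x722=() mlx=()
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810-C
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # E810-XXV, matched below
    x722+=(${pci_bus_cache["$intel:0x37d2"]})
    pci_devs+=("${e810[@]}")
    pci_devs=("${e810[@]}")                      # e810-only job: keep just e810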
00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:42.227 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:42.228 04:43:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:42.228 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:42.228 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:32:42.228 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:42.228 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:42.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:42.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.633 ms 00:32:42.228 00:32:42.228 --- 10.0.0.2 ping statistics --- 00:32:42.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.228 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:32:42.228 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:42.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:42.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:32:42.489 00:32:42.490 --- 10.0.0.1 ping statistics --- 00:32:42.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.490 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=3235105 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 3235105 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3235105 ']' 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:42.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
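Both pings succeeding validates the point-to-point topology that nvmf_tcp_init assembled from the two E810 ports found earlier: cvl_0_0 becomes the target port at 10.0.0.2 inside the cvl_0_0_ns_spdk namespace, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and an iptables rule admits NVMe/TCP traffic on port 4420. Condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target ns -> initiator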
00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:42.490 04:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:42.490 [2024-11-05 04:43:55.965780] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:42.490 [2024-11-05 04:43:55.966912] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:32:42.490 [2024-11-05 04:43:55.966963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:42.490 [2024-11-05 04:43:56.068057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:42.490 [2024-11-05 04:43:56.118447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:42.490 [2024-11-05 04:43:56.118498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:42.490 [2024-11-05 04:43:56.118507] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:42.490 [2024-11-05 04:43:56.118514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:42.490 [2024-11-05 04:43:56.118520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:42.490 [2024-11-05 04:43:56.119265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:42.751 [2024-11-05 04:43:56.195138] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:42.751 [2024-11-05 04:43:56.195430] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
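Because this is the --interrupt-mode variant of the suite, the target that just started runs its single reactor on core 1 in interrupt mode: rather than busy-polling, the reactor and its spdk_threads (app_thread and nvmf_tgt_poll_group_000) sleep until an event arrives, which is what the thread.c NOTICE lines record. The launch line, reduced to its essentials from the nvmfappstart trace:

    # -i 0: shared-memory id; -e 0xFFFF: tracepoint group mask (see the
    # app_setup_trace NOTICEs); -m 0x2: core mask, i.e. core 1 only.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
    # waitforlisten then blocks until the app answers on /var/tmp/spdk.sock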
00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:43.323 [2024-11-05 04:43:56.816124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:43.323 Malloc0 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
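The rpc_cmd calls traced here provision the target over its UNIX-domain socket: create the TCP transport, back it with a 64 MiB / 512 B-block malloc bdev, expose that bdev as a namespace of cnode1, and (in the call that completes just below) open the 10.0.0.2:4420 listener. rpc_cmd is the autotest wrapper around scripts/rpc.py, so the same sequence done by hand would look roughly like this (the -o flag is assumed to be the TCP C2H-success toggle; -u sets the IO unit size):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001                 # -a: allow any host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420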
00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:43.323 [2024-11-05 04:43:56.880299] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3235153 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3235153 /var/tmp/bdevperf.sock 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 3235153 ']' 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:43.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:43.323 04:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:43.323 [2024-11-05 04:43:56.935921] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:32:43.323 [2024-11-05 04:43:56.935996] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3235153 ] 00:32:43.585 [2024-11-05 04:43:57.013493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.585 [2024-11-05 04:43:57.055659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.156 04:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:44.156 04:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:32:44.156 04:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:44.156 04:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:44.156 04:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:44.416 NVMe0n1 00:32:44.416 04:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:44.416 04:43:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:44.416 Running I/O for 10 seconds... 00:32:46.301 9216.00 IOPS, 36.00 MiB/s [2024-11-05T03:44:01.325Z] 9235.00 IOPS, 36.07 MiB/s [2024-11-05T03:44:02.266Z] 9499.33 IOPS, 37.11 MiB/s [2024-11-05T03:44:03.209Z] 10123.00 IOPS, 39.54 MiB/s [2024-11-05T03:44:04.151Z] 10544.20 IOPS, 41.19 MiB/s [2024-11-05T03:44:05.091Z] 10828.33 IOPS, 42.30 MiB/s [2024-11-05T03:44:06.032Z] 11098.29 IOPS, 43.35 MiB/s [2024-11-05T03:44:06.974Z] 11277.38 IOPS, 44.05 MiB/s [2024-11-05T03:44:08.359Z] 11451.00 IOPS, 44.73 MiB/s [2024-11-05T03:44:08.359Z] 11568.50 IOPS, 45.19 MiB/s 00:32:54.719 Latency(us) 00:32:54.719 [2024-11-05T03:44:08.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.719 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:54.719 Verification LBA range: start 0x0 length 0x4000 00:32:54.719 NVMe0n1 : 10.07 11588.29 45.27 0.00 0.00 88035.09 24685.23 65536.00 00:32:54.719 [2024-11-05T03:44:08.359Z] =================================================================================================================== 00:32:54.719 [2024-11-05T03:44:08.359Z] Total : 11588.29 45.27 0.00 0.00 88035.09 24685.23 65536.00 00:32:54.719 { 00:32:54.719 "results": [ 00:32:54.719 { 00:32:54.719 "job": "NVMe0n1", 00:32:54.719 "core_mask": "0x1", 00:32:54.719 "workload": "verify", 00:32:54.719 "status": "finished", 00:32:54.719 "verify_range": { 00:32:54.719 "start": 0, 00:32:54.719 "length": 16384 00:32:54.719 }, 00:32:54.719 "queue_depth": 1024, 00:32:54.719 "io_size": 4096, 00:32:54.719 "runtime": 10.066716, 00:32:54.719 "iops": 11588.28758057742, 00:32:54.719 "mibps": 45.26674836163055, 00:32:54.719 "io_failed": 0, 00:32:54.719 "io_timeout": 0, 00:32:54.719 "avg_latency_us": 88035.08942714761, 00:32:54.719 "min_latency_us": 24685.226666666666, 00:32:54.719 "max_latency_us": 65536.0 00:32:54.719 } 00:32:54.719 ], 
00:32:54.719 "core_count": 1 00:32:54.719 } 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3235153 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3235153 ']' 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3235153 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3235153 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3235153' 00:32:54.719 killing process with pid 3235153 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3235153 00:32:54.719 Received shutdown signal, test time was about 10.000000 seconds 00:32:54.719 00:32:54.719 Latency(us) 00:32:54.719 [2024-11-05T03:44:08.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.719 [2024-11-05T03:44:08.359Z] =================================================================================================================== 00:32:54.719 [2024-11-05T03:44:08.359Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3235153 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:54.719 rmmod nvme_tcp 00:32:54.719 rmmod nvme_fabrics 00:32:54.719 rmmod nvme_keyring 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:54.719 04:44:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 3235105 ']' 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 3235105 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 3235105 ']' 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 3235105 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3235105 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3235105' 00:32:54.719 killing process with pid 3235105 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 3235105 00:32:54.719 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 3235105 00:32:54.980 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:54.980 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:54.980 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:54.980 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:54.980 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:32:54.980 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:54.980 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:32:54.980 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:54.980 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:54.980 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:54.980 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:54.980 04:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.526 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:57.526 00:32:57.526 real 0m21.804s 00:32:57.526 user 0m24.319s 00:32:57.526 sys 0m7.035s 00:32:57.526 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:57.527 ************************************ 00:32:57.527 END TEST nvmf_queue_depth 00:32:57.527 ************************************ 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:57.527 ************************************ 00:32:57.527 START TEST nvmf_target_multipath 00:32:57.527 ************************************ 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:57.527 * Looking for test storage... 00:32:57.527 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:57.527 04:44:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:57.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.527 --rc genhtml_branch_coverage=1 00:32:57.527 --rc genhtml_function_coverage=1 00:32:57.527 --rc genhtml_legend=1 00:32:57.527 --rc geninfo_all_blocks=1 00:32:57.527 --rc geninfo_unexecuted_blocks=1 00:32:57.527 00:32:57.527 ' 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:57.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.527 --rc genhtml_branch_coverage=1 00:32:57.527 --rc genhtml_function_coverage=1 00:32:57.527 --rc genhtml_legend=1 00:32:57.527 --rc geninfo_all_blocks=1 00:32:57.527 --rc geninfo_unexecuted_blocks=1 00:32:57.527 00:32:57.527 ' 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:57.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.527 --rc genhtml_branch_coverage=1 00:32:57.527 --rc genhtml_function_coverage=1 00:32:57.527 --rc genhtml_legend=1 00:32:57.527 --rc geninfo_all_blocks=1 00:32:57.527 --rc 
geninfo_unexecuted_blocks=1 00:32:57.527 00:32:57.527 ' 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:57.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:57.527 --rc genhtml_branch_coverage=1 00:32:57.527 --rc genhtml_function_coverage=1 00:32:57.527 --rc genhtml_legend=1 00:32:57.527 --rc geninfo_all_blocks=1 00:32:57.527 --rc geninfo_unexecuted_blocks=1 00:32:57.527 00:32:57.527 ' 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:57.527 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:57.528 04:44:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:57.528 04:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
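The xtrace run that follows (starting at 00:33:04) is gather_supported_nvmf_pci_devs from nvmf/common.sh: it builds allow-lists of Intel E810/X722 and Mellanox PCI device IDs, selects the e810 list for this rig, and resolves each matching PCI function to its kernel net device through sysfs. Below is a minimal sketch of that sysfs lookup, assuming the two E810 ports this run reports (0000:4b:00.0 and 0000:4b:00.1); it is illustrative only, not the verbatim nvmf/common.sh source:

  # Map a PCI function to the net device(s) the kernel registered for it.
  for pci in 0000:4b:00.0 0000:4b:00.1; do            # assumed: the two ports seen in this log
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      [[ -e ${pci_net_devs[0]} ]] || continue         # no netdev => bound to vfio-pci/uio, skip
      pci_net_devs=("${pci_net_devs[@]##*/}")         # strip the sysfs path, keep interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done

Run on this machine it would print the same two "Found net devices under ..." lines the trace below logs for cvl_0_0 and cvl_0_1.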
00:33:04.218 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:04.218 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:33:04.218 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:04.218 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:04.219 04:44:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:04.219 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:04.219 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:04.219 04:44:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:04.219 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:04.219 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:04.219 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:04.480 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:04.480 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:04.480 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:04.480 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:04.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:04.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.622 ms
00:33:04.480
00:33:04.480 --- 10.0.0.2 ping statistics ---
00:33:04.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:04.480 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms
00:33:04.480 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:33:04.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:33:04.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms
00:33:04.480
00:33:04.480 --- 10.0.0.1 ping statistics ---
00:33:04.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:33:04.480 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms
00:33:04.480 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:33:04.480 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:33:04.480 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:33:04.480 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:33:04.480 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:33:04.480 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:33:04.480 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:33:04.481 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:33:04.481 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:33:04.481 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:33:04.481 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:33:04.481 only one NIC for nvmf test
00:33:04.481 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:33:04.481 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:04.481 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:33:04.481 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:04.481 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:33:04.481 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:04.481 04:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:33:04.481 rmmod nvme_tcp
00:33:04.481 rmmod nvme_fabrics
00:33:04.481 rmmod nvme_keyring
00:33:04.481 04:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:04.481 04:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:33:04.481 04:44:18
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:04.481 04:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:04.481 04:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:04.481 04:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:04.481 04:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:04.481 04:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:04.481 04:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:04.481 04:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:04.481 04:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:04.481 04:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:04.481 04:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:04.481 04:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:04.481 04:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:04.481 04:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:33:07.026 04:44:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:07.026 00:33:07.026 real 0m9.569s 00:33:07.026 user 0m1.993s 00:33:07.026 sys 0m5.516s 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:07.026 ************************************ 00:33:07.026 END TEST nvmf_target_multipath 00:33:07.026 ************************************ 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:07.026 ************************************ 00:33:07.026 START TEST nvmf_zcopy 00:33:07.026 ************************************ 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:07.026 * Looking for test storage... 
00:33:07.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:07.026 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:07.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.027 --rc genhtml_branch_coverage=1 00:33:07.027 --rc genhtml_function_coverage=1 00:33:07.027 --rc genhtml_legend=1 00:33:07.027 --rc geninfo_all_blocks=1 00:33:07.027 --rc geninfo_unexecuted_blocks=1 00:33:07.027 00:33:07.027 ' 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:07.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.027 --rc genhtml_branch_coverage=1 00:33:07.027 --rc genhtml_function_coverage=1 00:33:07.027 --rc genhtml_legend=1 00:33:07.027 --rc geninfo_all_blocks=1 00:33:07.027 --rc geninfo_unexecuted_blocks=1 00:33:07.027 00:33:07.027 ' 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:07.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.027 --rc genhtml_branch_coverage=1 00:33:07.027 --rc genhtml_function_coverage=1 00:33:07.027 --rc genhtml_legend=1 00:33:07.027 --rc geninfo_all_blocks=1 00:33:07.027 --rc geninfo_unexecuted_blocks=1 00:33:07.027 00:33:07.027 ' 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:07.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.027 --rc genhtml_branch_coverage=1 00:33:07.027 --rc genhtml_function_coverage=1 00:33:07.027 --rc genhtml_legend=1 00:33:07.027 --rc geninfo_all_blocks=1 00:33:07.027 --rc geninfo_unexecuted_blocks=1 00:33:07.027 00:33:07.027 ' 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.027 04:44:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.027 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.028 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.028 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:07.028 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:07.028 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:33:07.028 04:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:33:15.178 04:44:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:15.178 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:15.178 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:15.178 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:15.179 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:15.179 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:15.179 04:44:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:15.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:15.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.640 ms 00:33:15.179 00:33:15.179 --- 10.0.0.2 ping statistics --- 00:33:15.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.179 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:15.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:15.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:33:15.179 00:33:15.179 --- 10.0.0.1 ping statistics --- 00:33:15.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:15.179 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=3245693 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 3245693 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 3245693 ']' 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:15.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:15.179 04:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:15.179 [2024-11-05 04:44:27.919081] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:15.179 [2024-11-05 04:44:27.920215] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:33:15.179 [2024-11-05 04:44:27.920267] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:15.179 [2024-11-05 04:44:28.020760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.179 [2024-11-05 04:44:28.073192] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:15.179 [2024-11-05 04:44:28.073250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:15.179 [2024-11-05 04:44:28.073259] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:15.179 [2024-11-05 04:44:28.073266] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:15.179 [2024-11-05 04:44:28.073272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:15.179 [2024-11-05 04:44:28.074023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.179 [2024-11-05 04:44:28.149831] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:15.179 [2024-11-05 04:44:28.150124] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
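Up to this point nvmftestinit has identified the two ice-driven E810 ports (0x8086:0x159b) and wired them back-to-back: cvl_0_0 becomes the target interface inside a dedicated network namespace at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, the NVMe/TCP port is opened in the firewall, and both directions are ping-verified before nvmf_tgt is started inside the namespace with --interrupt-mode. A standalone sketch of that wiring, built only from the commands visible in the xtrace above (run as root; the interface names are specific to this machine, and the discovery loop is a simplified stand-in for gather_supported_nvmf_pci_devs):

    #!/usr/bin/env bash
    # Sketch of the topology nvmf_tcp_init builds in the log above.
    set -euxo pipefail
    # Locate net devices backing Intel E810 ports (vendor 0x8086, device 0x159b).
    for pci in $(grep -li 'PCI_ID=8086:159B' /sys/bus/pci/devices/*/uevent | xargs -r dirname); do
        echo "Found $pci -> $(ls "$pci/net")"
    done
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                    # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1             # target -> initiator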
00:33:15.179 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:15.179 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:33:15.179 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:15.179 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:15.179 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:15.179 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:15.179 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:15.179 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:15.179 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.179 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:15.179 [2024-11-05 04:44:28.782874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.179 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.179 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:15.179 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.180 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:15.180 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.180 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:15.180 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.180 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:15.180 [2024-11-05 04:44:28.811179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:15.441 04:44:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:15.441 malloc0 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:15.441 { 00:33:15.441 "params": { 00:33:15.441 "name": "Nvme$subsystem", 00:33:15.441 "trtype": "$TEST_TRANSPORT", 00:33:15.441 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:15.441 "adrfam": "ipv4", 00:33:15.441 "trsvcid": "$NVMF_PORT", 00:33:15.441 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:15.441 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:15.441 "hdgst": ${hdgst:-false}, 00:33:15.441 "ddgst": ${ddgst:-false} 00:33:15.441 }, 00:33:15.441 "method": "bdev_nvme_attach_controller" 00:33:15.441 } 00:33:15.441 EOF 00:33:15.441 )") 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:15.441 04:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:15.441 "params": { 00:33:15.441 "name": "Nvme1", 00:33:15.441 "trtype": "tcp", 00:33:15.441 "traddr": "10.0.0.2", 00:33:15.441 "adrfam": "ipv4", 00:33:15.441 "trsvcid": "4420", 00:33:15.441 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:15.441 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:15.441 "hdgst": false, 00:33:15.441 "ddgst": false 00:33:15.441 }, 00:33:15.441 "method": "bdev_nvme_attach_controller" 00:33:15.441 }' 00:33:15.441 [2024-11-05 04:44:28.917005] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
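The JSON printed by gen_nvmf_target_json just above is everything bdevperf needs to reach the target: one bdev_nvme_attach_controller entry rendered from the test environment, handed over as --json /dev/fd/62 via process substitution. A condensed sketch of the provisioning traced in this run (the RPC arguments are verbatim from the log; the rpc.py path and the availability of the gen_nvmf_target_json helper from nvmf/common.sh are assumptions):

    # Target side: the RPC sequence zcopy.sh@22-30 issues against /var/tmp/spdk.sock.
    rpc=./scripts/rpc.py                                  # path assumed
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy     # zcopy on, in-capsule data size 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0            # 32 MiB bdev, 4 KiB blocks
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

    # Initiator side: a process substitution is exactly what the
    # --json /dev/fd/62 in the xtrace corresponds to.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192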
00:33:15.441 [2024-11-05 04:44:28.917071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3245807 ]
00:33:15.441 [2024-11-05 04:44:28.991400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:15.441 [2024-11-05 04:44:29.033414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:15.703 Running I/O for 10 seconds...
00:33:17.586 6426.00 IOPS, 50.20 MiB/s [2024-11-05T03:44:32.612Z]
6463.50 IOPS, 50.50 MiB/s [2024-11-05T03:44:33.554Z]
6480.33 IOPS, 50.63 MiB/s [2024-11-05T03:44:34.497Z]
6495.75 IOPS, 50.75 MiB/s [2024-11-05T03:44:35.438Z]
6925.20 IOPS, 54.10 MiB/s [2024-11-05T03:44:36.380Z]
7362.33 IOPS, 57.52 MiB/s [2024-11-05T03:44:37.322Z]
7672.43 IOPS, 59.94 MiB/s [2024-11-05T03:44:38.264Z]
7905.12 IOPS, 61.76 MiB/s [2024-11-05T03:44:39.649Z]
8085.33 IOPS, 63.17 MiB/s [2024-11-05T03:44:39.649Z]
8229.40 IOPS, 64.29 MiB/s
00:33:26.009 Latency(us)
00:33:26.009 [2024-11-05T03:44:39.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:26.009 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:33:26.009 Verification LBA range: start 0x0 length 0x1000
00:33:26.009 Nvme1n1 : 10.05 8200.40 64.07 0.00 0.00 15494.31 1966.08 42598.40
00:33:26.009 [2024-11-05T03:44:39.649Z] ===================================================================================================================
00:33:26.009 [2024-11-05T03:44:39.649Z] Total : 8200.40 64.07 0.00 0.00 15494.31 1966.08 42598.40
00:33:26.009 04:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3247810
00:33:26.009 04:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:33:26.009 04:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:33:26.009 04:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:33:26.009 04:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:33:26.009 04:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:33:26.009 04:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:33:26.009 04:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:33:26.009 04:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:33:26.009 {
00:33:26.009 "params": {
00:33:26.009 "name": "Nvme$subsystem",
00:33:26.009 "trtype": "$TEST_TRANSPORT",
00:33:26.009 "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:26.009 "adrfam": "ipv4",
00:33:26.009 "trsvcid": "$NVMF_PORT",
00:33:26.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:26.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:26.009 "hdgst": ${hdgst:-false},
00:33:26.009 "ddgst": ${ddgst:-false}
00:33:26.009 },
00:33:26.009 "method": "bdev_nvme_attach_controller"
00:33:26.009 }
00:33:26.009 EOF
00:33:26.009 )")
00:33:26.009 [2024-11-05 04:44:39.410432] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:26.009 [2024-11-05 04:44:39.410463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
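Two details for reading the verify results above: at the fixed 8 KiB I/O size the MiB/s column follows from the IOPS column (8200.40 IOPS x 8192 B / 2^20 = 64.07 MiB/s), and Average/min/max are completion latencies in microseconds, per the Latency(us) header. The run that starts next backgrounds bdevperf so the test can keep driving RPCs while I/O is in flight; a sketch of that pattern as the xtrace suggests it (names taken from the log):

    # Launch the 5-second 50/50 random read/write job in the background and
    # keep its pid so RPC churn can run concurrently with the I/O.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -t 5 -q 128 -w randrw -M 50 -o 8192 &
    perfpid=$!                # 3247810 in this run
    # ... namespace churn happens here, see the loop sketched further down ...
    wait "$perfpid"           # propagates bdevperf's exit status to the test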
00:33:26.009 04:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:33:26.009 04:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:33:26.009 04:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:33:26.009 04:44:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:33:26.009 "params": {
00:33:26.009 "name": "Nvme1",
00:33:26.009 "trtype": "tcp",
00:33:26.009 "traddr": "10.0.0.2",
00:33:26.009 "adrfam": "ipv4",
00:33:26.009 "trsvcid": "4420",
00:33:26.009 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:26.009 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:26.009 "hdgst": false,
00:33:26.009 "ddgst": false
00:33:26.009 },
00:33:26.009 "method": "bdev_nvme_attach_controller"
00:33:26.009 }'
00:33:26.010 [2024-11-05 04:44:39.458087] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization...
00:33:26.010 [2024-11-05 04:44:39.458133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247810 ]
00:33:26.010 [2024-11-05 04:44:39.527601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:26.010 [2024-11-05 04:44:39.562652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:33:26.271 Running I/O for 5 seconds...
[Interleaved with the records above and continuing below, the paired messages "subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" recur roughly every 12 ms from 04:44:39.422395 onward; the repeated pairs are elided.]
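The elided pairs are expected failures, not test breakage: each nvmf_subsystem_add_ns RPC asks for NSID 1 while malloc0 is still attached to cnode1, so subsystem.c rejects the request and nvmf_rpc.c logs the failed RPC. A hypothetical loop that would produce exactly this pattern (the real loop body in zcopy.sh may differ):

    # Hypothetical reproduction: hammer the add-ns RPC while bdevperf runs.
    # Every call fails with "Requested NSID 1 already in use", matching the
    # repeating error pairs in this log.
    while kill -0 "$perfpid" 2>/dev/null; do
        ./scripts/rpc.py nvmf_subsystem_add_ns \
            nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true   # failure expected
    done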
00:33:27.316 18891.00 IOPS, 147.59 MiB/s [2024-11-05T03:44:40.956Z]
[The same error pairs keep arriving at the same cadence before and after this sample; the last one, cut off mid-record at 04:44:41.374258, is where this part of the log ends.]
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.838 [2024-11-05 04:44:41.374272] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.838 [2024-11-05 04:44:41.386909] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.838 [2024-11-05 04:44:41.386923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.838 [2024-11-05 04:44:41.401335] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.838 [2024-11-05 04:44:41.401349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.838 [2024-11-05 04:44:41.414154] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.838 [2024-11-05 04:44:41.414169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.838 [2024-11-05 04:44:41.426971] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.838 [2024-11-05 04:44:41.426986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.838 [2024-11-05 04:44:41.441759] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.838 [2024-11-05 04:44:41.441774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.838 [2024-11-05 04:44:41.454359] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.838 [2024-11-05 04:44:41.454374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:27.838 [2024-11-05 04:44:41.467010] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:27.838 [2024-11-05 04:44:41.467024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.099 [2024-11-05 04:44:41.481306] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.099 [2024-11-05 04:44:41.481321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.099 [2024-11-05 04:44:41.494012] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.099 [2024-11-05 04:44:41.494027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.099 [2024-11-05 04:44:41.506724] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.099 [2024-11-05 04:44:41.506738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.099 [2024-11-05 04:44:41.521762] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.099 [2024-11-05 04:44:41.521777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.099 [2024-11-05 04:44:41.534718] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.099 [2024-11-05 04:44:41.534733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.099 [2024-11-05 04:44:41.549442] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.099 [2024-11-05 04:44:41.549457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.099 [2024-11-05 04:44:41.562270] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.099 [2024-11-05 04:44:41.562285] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.099 [2024-11-05 04:44:41.574286] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.099 [2024-11-05 04:44:41.574301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.099 [2024-11-05 04:44:41.587219] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.099 [2024-11-05 04:44:41.587234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.099 [2024-11-05 04:44:41.601565] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.099 [2024-11-05 04:44:41.601581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.099 [2024-11-05 04:44:41.614544] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.099 [2024-11-05 04:44:41.614558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.099 [2024-11-05 04:44:41.629168] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.099 [2024-11-05 04:44:41.629183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.099 [2024-11-05 04:44:41.642153] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.099 [2024-11-05 04:44:41.642168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.099 [2024-11-05 04:44:41.655381] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.100 [2024-11-05 04:44:41.655395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.100 [2024-11-05 04:44:41.669086] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.100 [2024-11-05 04:44:41.669101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.100 [2024-11-05 04:44:41.681812] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.100 [2024-11-05 04:44:41.681826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.100 [2024-11-05 04:44:41.694799] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.100 [2024-11-05 04:44:41.694813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.100 [2024-11-05 04:44:41.709916] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.100 [2024-11-05 04:44:41.709930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.100 [2024-11-05 04:44:41.722927] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.100 [2024-11-05 04:44:41.722941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.737721] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.737736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.750823] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.750837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.765508] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.765523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.778302] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.778318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.790742] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.790761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.805285] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.805300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.817531] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.817546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.830690] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.830705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.845316] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.845331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.858307] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.858322] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.870780] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.870794] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 18938.50 IOPS, 147.96 MiB/s [2024-11-05T03:44:42.001Z] [2024-11-05 04:44:41.885673] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.885688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.898458] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.898473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.910924] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.910938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.925508] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.925524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.938550] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.938564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.953478] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:28.361 [2024-11-05 04:44:41.953493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.966178] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.966193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.978713] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.978727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.361 [2024-11-05 04:44:41.993246] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.361 [2024-11-05 04:44:41.993261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.005918] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.005934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.018218] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.018233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.031130] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.031145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.045673] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.045688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.059091] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.059106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.073696] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.073711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.086507] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.086527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.098056] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.098071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.110836] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.110850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.125662] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.125678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.138933] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.138948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.153609] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.153624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.166293] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.166309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.178781] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.178796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.194107] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.194122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.206554] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.206568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.221017] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.221032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.233867] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.233882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.623 [2024-11-05 04:44:42.246678] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.623 [2024-11-05 04:44:42.246692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.261381] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.261397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.274150] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.274165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.287159] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.287174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.302098] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.302114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.314769] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.314784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.329564] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.329578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.342465] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.342483] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.355173] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.355188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.369134] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.369148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.381878] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.381893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.394729] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.394743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.409936] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.409951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.422795] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.422810] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.437481] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.437496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.450769] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.450784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.465695] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.465710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.479096] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.479110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.493848] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.493863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.506642] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.506656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:28.884 [2024-11-05 04:44:42.521453] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:28.884 [2024-11-05 04:44:42.521468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.534500] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.534515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.547257] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.547271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.561959] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.561974] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.574651] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.574665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.589727] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.589742] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.602782] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.602799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.617757] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.617772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.630777] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.630791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.645632] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.645647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.658543] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.658557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.673927] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.673942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.686729] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.686743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.701856] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.701871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.714815] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.714829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.729885] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.729900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.743020] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.743034] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.757767] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.757782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.770383] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.770398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.146 [2024-11-05 04:44:42.782695] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.146 [2024-11-05 04:44:42.782708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:42.797531] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:42.797546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:42.810023] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:42.810037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:42.822859] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:42.822873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:42.837537] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:42.837552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:42.850640] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:42.850654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:42.865799] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:42.865818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 18939.67 IOPS, 147.97 MiB/s [2024-11-05T03:44:43.048Z] [2024-11-05 04:44:42.878185] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:42.878199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:42.891280] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:42.891294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:42.905898] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:42.905912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:42.918702] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:42.918716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:42.934115] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:42.934130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 
04:44:42.946826] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:42.946840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:42.961918] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:42.961932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:42.974546] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:42.974560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:42.989242] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:42.989257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:43.002172] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:43.002187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:43.015379] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:43.015394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:43.029390] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:43.029405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.408 [2024-11-05 04:44:43.041892] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.408 [2024-11-05 04:44:43.041906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.669 [2024-11-05 04:44:43.054194] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.669 [2024-11-05 04:44:43.054209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.669 [2024-11-05 04:44:43.067126] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.669 [2024-11-05 04:44:43.067142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.669 [2024-11-05 04:44:43.081651] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.669 [2024-11-05 04:44:43.081666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.669 [2024-11-05 04:44:43.094178] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.669 [2024-11-05 04:44:43.094193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.669 [2024-11-05 04:44:43.106993] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.669 [2024-11-05 04:44:43.107008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.669 [2024-11-05 04:44:43.121881] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.669 [2024-11-05 04:44:43.121896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.669 [2024-11-05 04:44:43.134663] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.669 [2024-11-05 04:44:43.134677] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.669 [2024-11-05 04:44:43.149676] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.669 [2024-11-05 04:44:43.149690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.669 [2024-11-05 04:44:43.162686] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.669 [2024-11-05 04:44:43.162700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.669 [2024-11-05 04:44:43.177187] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.669 [2024-11-05 04:44:43.177202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.669 [2024-11-05 04:44:43.190525] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.669 [2024-11-05 04:44:43.190539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.669 [2024-11-05 04:44:43.202977] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.669 [2024-11-05 04:44:43.202990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.670 [2024-11-05 04:44:43.217567] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.670 [2024-11-05 04:44:43.217582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.670 [2024-11-05 04:44:43.231030] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.670 [2024-11-05 04:44:43.231045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.670 [2024-11-05 04:44:43.246146] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.670 [2024-11-05 04:44:43.246161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.670 [2024-11-05 04:44:43.258636] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.670 [2024-11-05 04:44:43.258651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.670 [2024-11-05 04:44:43.270339] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.670 [2024-11-05 04:44:43.270356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.670 [2024-11-05 04:44:43.283066] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.670 [2024-11-05 04:44:43.283081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.670 [2024-11-05 04:44:43.298005] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.670 [2024-11-05 04:44:43.298021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.310568] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.310583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.322639] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.322652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.337692] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.337707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.350607] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.350622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.362951] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.362966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.377237] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.377252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.390568] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.390583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.403398] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.403412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.417992] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.418007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.430478] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.430493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.441737] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.441757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.454613] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.454628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.466924] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.466938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.481499] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.481514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.494434] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.494449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.506774] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.506790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.521618] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.521633] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.534740] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.534762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.549481] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.549496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:29.931 [2024-11-05 04:44:43.562793] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:29.931 [2024-11-05 04:44:43.562807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.205 [2024-11-05 04:44:43.577426] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.205 [2024-11-05 04:44:43.577442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.205 [2024-11-05 04:44:43.590540] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.205 [2024-11-05 04:44:43.590555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.205 [2024-11-05 04:44:43.605164] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.205 [2024-11-05 04:44:43.605179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.205 [2024-11-05 04:44:43.618073] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.205 [2024-11-05 04:44:43.618093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.205 [2024-11-05 04:44:43.630742] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.206 [2024-11-05 04:44:43.630761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.206 [2024-11-05 04:44:43.645954] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.206 [2024-11-05 04:44:43.645969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.206 [2024-11-05 04:44:43.658769] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.206 [2024-11-05 04:44:43.658784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.206 [2024-11-05 04:44:43.673772] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.206 [2024-11-05 04:44:43.673787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.206 [2024-11-05 04:44:43.686659] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.206 [2024-11-05 04:44:43.686673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.206 [2024-11-05 04:44:43.701913] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.206 [2024-11-05 04:44:43.701928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.206 [2024-11-05 04:44:43.714496] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.206 [2024-11-05 04:44:43.714511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.206 [2024-11-05 04:44:43.727198] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.206 [2024-11-05 04:44:43.727213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.206 [2024-11-05 04:44:43.741638] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.206 [2024-11-05 04:44:43.741653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.206 [2024-11-05 04:44:43.754912] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.206 [2024-11-05 04:44:43.754926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.206 [2024-11-05 04:44:43.769508] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.206 [2024-11-05 04:44:43.769523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.206 [2024-11-05 04:44:43.782302] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.206 [2024-11-05 04:44:43.782317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.207 [2024-11-05 04:44:43.795254] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.207 [2024-11-05 04:44:43.795269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.207 [2024-11-05 04:44:43.810348] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.207 [2024-11-05 04:44:43.810363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.207 [2024-11-05 04:44:43.823125] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.207 [2024-11-05 04:44:43.823141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.207 [2024-11-05 04:44:43.837951] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.207 [2024-11-05 04:44:43.837966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.472 [2024-11-05 04:44:43.850513] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:43.850528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:43.865511] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:43.865526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:43.878286] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:43.878305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 18939.50 IOPS, 147.96 MiB/s [2024-11-05T03:44:44.113Z] [2024-11-05 04:44:43.891017] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:43.891031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:43.905985] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:43.906000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:43.918434] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:30.473 [2024-11-05 04:44:43.918448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:43.931282] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:43.931297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:43.945807] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:43.945822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:43.959015] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:43.959029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:43.973120] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:43.973135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:43.986287] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:43.986302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:43.998739] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:43.998758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:44.013620] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:44.013635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:44.026792] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:44.026808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:44.041102] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:44.041117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:44.053816] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:44.053831] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:44.066348] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:44.066364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:44.079185] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:44.079199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:44.094015] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:44.094030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.473 [2024-11-05 04:44:44.106784] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.473 [2024-11-05 04:44:44.106798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.121547] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.121562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.134109] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.134128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.147134] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.147149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.161749] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.161764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.174804] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.174818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.189827] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.189842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.202733] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.202752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.217703] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.217718] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.230705] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.230719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.245845] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.245860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.258643] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.258657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.273425] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.273440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.286250] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.286265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.298238] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.298253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.311337] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.311351] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.325722] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.325737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.338383] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.338397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.350946] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.350960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.734 [2024-11-05 04:44:44.365657] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.734 [2024-11-05 04:44:44.365671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.378319] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.378334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.390725] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.390739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.405549] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.405563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.418428] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.418442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.429871] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.429886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.442473] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.442488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.455173] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.455187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.469806] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.469821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.482802] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.482816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.497933] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.497948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.511150] 
subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.511164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.525777] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.525792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.538505] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.538519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.550094] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.550108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.563021] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.563036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.578178] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.578192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.591053] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.591067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.606139] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.606154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:30.995 [2024-11-05 04:44:44.618722] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:30.995 [2024-11-05 04:44:44.618735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.633378] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.633392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.646602] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.646617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.658402] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.658417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.671540] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.671554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.685873] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.685887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.698903] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.698918] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.713651] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.713666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.726467] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.726481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.739178] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.739192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.753579] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.753593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.766302] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.766316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.778691] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.778705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.793700] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.793714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.806488] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.806503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.818591] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.818606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.830961] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.830976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.845617] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.845632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.858723] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.858737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 [2024-11-05 04:44:44.873733] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.873751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 18941.80 IOPS, 147.98 MiB/s [2024-11-05T03:44:44.896Z] [2024-11-05 04:44:44.885184] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:31.256 [2024-11-05 04:44:44.885202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:31.256 00:33:31.256 
Latency(us)
00:33:31.257 [2024-11-05T03:44:44.897Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:31.257 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:33:31.257 Nvme1n1 : 5.01 18941.68 147.98 0.00 0.00 6750.37 2553.17 12014.93
00:33:31.257 [2024-11-05T03:44:44.897Z] ===================================================================================================================
00:33:31.257 [2024-11-05T03:44:44.897Z] Total : 18941.68 147.98 0.00 0.00 6750.37 2553.17 12014.93
00:33:31.517 [2024-11-05 04:44:44.894396] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:31.517 [2024-11-05 04:44:44.894410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:31.517 [2024-11-05 04:44:44.906401] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:31.517 [2024-11-05 04:44:44.906416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:31.517 [2024-11-05 04:44:44.918401] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:31.517 [2024-11-05 04:44:44.918413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:31.517 [2024-11-05 04:44:44.930401] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:31.517 [2024-11-05 04:44:44.930414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:31.517 [2024-11-05 04:44:44.942398] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:31.517 [2024-11-05 04:44:44.942408] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:31.517 [2024-11-05 04:44:44.954394] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:31.517 [2024-11-05 04:44:44.954403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:31.517 [2024-11-05 04:44:44.966394] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:31.517 [2024-11-05 04:44:44.966401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:31.517 [2024-11-05 04:44:44.978397] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:31.517 [2024-11-05 04:44:44.978407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:31.517 [2024-11-05 04:44:44.990394] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:31.517 [2024-11-05 04:44:44.990401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:31.517 [2024-11-05 04:44:45.002393] subsystem.c:2124:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:33:31.517 [2024-11-05 04:44:45.002400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:33:31.517 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3247810) - No such process
00:33:31.517 04:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3247810
00:33:31.517 04:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:33:31.517 04:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:33:31.517 04:44:45
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.517 04:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.517 04:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:31.517 04:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.517 04:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.517 delay0 00:33:31.517 04:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.517 04:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:31.517 04:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:31.517 04:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:31.517 04:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:31.517 04:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:31.517 [2024-11-05 04:44:45.110519] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:39.655 Initializing NVMe Controllers 00:33:39.655 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:39.655 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:39.655 Initialization complete. Launching workers. 
00:33:39.655 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 7527 00:33:39.655 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 7810, failed to submit 37 00:33:39.655 success 7644, unsuccessful 166, failed 0 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:39.655 rmmod nvme_tcp 00:33:39.655 rmmod nvme_fabrics 00:33:39.655 rmmod nvme_keyring 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 3245693 ']' 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 3245693 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 3245693 ']' 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 3245693 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3245693 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3245693' 00:33:39.655 killing process with pid 3245693 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 3245693 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 3245693 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:39.655 04:44:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:39.655 04:44:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.038 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:41.038 00:33:41.038 real 0m34.239s 00:33:41.038 user 0m44.265s 00:33:41.038 sys 0m11.971s 00:33:41.038 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:41.038 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:41.038 ************************************ 00:33:41.038 END TEST nvmf_zcopy 00:33:41.038 ************************************ 00:33:41.038 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:41.038 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:41.038 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:41.038 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:41.038 ************************************ 00:33:41.038 START TEST nvmf_nmic 00:33:41.038 ************************************ 00:33:41.038 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:41.300 * Looking for test storage... 
00:33:41.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:41.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.300 --rc genhtml_branch_coverage=1 00:33:41.300 --rc genhtml_function_coverage=1 00:33:41.300 --rc genhtml_legend=1 00:33:41.300 --rc geninfo_all_blocks=1 00:33:41.300 --rc geninfo_unexecuted_blocks=1 00:33:41.300 00:33:41.300 ' 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:41.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.300 --rc genhtml_branch_coverage=1 00:33:41.300 --rc genhtml_function_coverage=1 00:33:41.300 --rc genhtml_legend=1 00:33:41.300 --rc geninfo_all_blocks=1 00:33:41.300 --rc geninfo_unexecuted_blocks=1 00:33:41.300 00:33:41.300 ' 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:41.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.300 --rc genhtml_branch_coverage=1 00:33:41.300 --rc genhtml_function_coverage=1 00:33:41.300 --rc genhtml_legend=1 00:33:41.300 --rc geninfo_all_blocks=1 00:33:41.300 --rc geninfo_unexecuted_blocks=1 00:33:41.300 00:33:41.300 ' 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:41.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:41.300 --rc genhtml_branch_coverage=1 00:33:41.300 --rc genhtml_function_coverage=1 00:33:41.300 --rc genhtml_legend=1 00:33:41.300 --rc geninfo_all_blocks=1 00:33:41.300 --rc geninfo_unexecuted_blocks=1 00:33:41.300 00:33:41.300 ' 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:41.300 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:41.301 04:44:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:41.301 04:44:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:49.442 04:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:49.442 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:49.443 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:49.443 04:45:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:49.443 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:49.443 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:49.443 
04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:49.443 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
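The nvmf_tcp_init trace above (together with the link-up, iptables, and ping checks that follow) builds the physical-NIC test topology: one port of the e810 pair is moved into a private network namespace as the target (10.0.0.2, inside cvl_0_0_ns_spdk) while its sibling stays in the root namespace as the initiator (10.0.0.1). A minimal standalone sketch of the same setup, assuming the cvl_0_0/cvl_0_1 device names this run's PCI scan reported:

#!/usr/bin/env bash
# Rough sketch of nvmf_tcp_init as traced in this log; cvl_0_0/cvl_0_1 are
# the ice-driver netdev names found on this host and will differ elsewhere.
set -ex
ns=cvl_0_0_ns_spdk
ip netns add "$ns"
ip link set cvl_0_0 netns "$ns"              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root namespace
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$ns" ip link set cvl_0_0 up
ip netns exec "$ns" ip link set lo up
# open the NVMe/TCP port on the initiator-side interface (what ipts() wraps below)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # root namespace reaches the target
ip netns exec "$ns" ping -c 1 10.0.0.1       # and the target reaches back

The two ping checks mirror the ones traced below: sub-millisecond round trips in both directions confirm the namespaces can carry the NVMe/TCP traffic before any SPDK process starts.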
00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:49.443 04:45:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:49.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:49.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:33:49.443 00:33:49.443 --- 10.0.0.2 ping statistics --- 00:33:49.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.443 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:49.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:49.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:33:49.443 00:33:49.443 --- 10.0.0.1 ping statistics --- 00:33:49.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:49.443 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt 
-i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=3254570 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 3254570 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 3254570 ']' 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:49.443 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.444 [2024-11-05 04:45:02.150559] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:49.444 [2024-11-05 04:45:02.151418] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:33:49.444 [2024-11-05 04:45:02.151460] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:49.444 [2024-11-05 04:45:02.222636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:49.444 [2024-11-05 04:45:02.260788] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:49.444 [2024-11-05 04:45:02.260822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:49.444 [2024-11-05 04:45:02.260830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:49.444 [2024-11-05 04:45:02.260836] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:49.444 [2024-11-05 04:45:02.260842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:49.444 [2024-11-05 04:45:02.262318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:49.444 [2024-11-05 04:45:02.262435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:49.444 [2024-11-05 04:45:02.262592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.444 [2024-11-05 04:45:02.262593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:49.444 [2024-11-05 04:45:02.317498] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:49.444 [2024-11-05 04:45:02.317827] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:49.444 [2024-11-05 04:45:02.318694] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
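At this point nvmf_tgt is coming up inside the namespace: DPDK is initialized, four reactors are running, and each spdk_thread is being switched to interrupt mode (the remaining poll-group notices continue just below). nvmfappstart itself reduces to launching the binary and blocking until the RPC socket answers; a rough equivalent from the SPDK repo root, where the rpc_get_methods polling loop is a simplified stand-in for the autotest waitforlisten helper:

# Start the target in the namespace, interrupt mode, cores 0-3 (-m 0xF).
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
nvmfpid=$!
# Simplified waitforlisten: the target is ready once its RPC socket responds.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
  sleep 0.5
done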
00:33:49.444 [2024-11-05 04:45:02.318861] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:49.444 [2024-11-05 04:45:02.319049] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.444 [2024-11-05 04:45:02.403033] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.444 Malloc0 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
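The rpc_cmd calls traced here are the whole nmic fixture: a TCP transport, a 64 MiB malloc bdev, subsystem cnode1 with Malloc0 as its namespace, and a listener on 10.0.0.2:4420. Condensed into plain rpc.py invocations (the rpc() wrapper is illustrative; rpc_cmd in the test does the same against /var/tmp/spdk.sock), including the test case1 check demonstrated just below, where a second subsystem must fail to claim the same bdev:

# Illustrative condensation of the traced rpc_cmd sequence.
rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# test case1: Malloc0 is already claimed exclusive_write by cnode1, so adding it
# to a second subsystem must fail (-32602 Invalid parameters in the trace below).
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
if rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
  echo "unexpected success: one bdev must not serve two subsystems" >&2
  exit 1
fi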
00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.444 [2024-11-05 04:45:02.475252] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:49.444 test case1: single bdev can't be used in multiple subsystems 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.444 [2024-11-05 04:45:02.510959] bdev.c:8192:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:49.444 [2024-11-05 04:45:02.510979] subsystem.c:2151:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:49.444 [2024-11-05 04:45:02.510987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:49.444 request: 00:33:49.444 { 00:33:49.444 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:49.444 "namespace": { 00:33:49.444 "bdev_name": "Malloc0", 00:33:49.444 "no_auto_visible": false 00:33:49.444 }, 00:33:49.444 "method": "nvmf_subsystem_add_ns", 00:33:49.444 "req_id": 1 00:33:49.444 } 00:33:49.444 Got JSON-RPC error response 00:33:49.444 response: 00:33:49.444 { 00:33:49.444 "code": -32602, 00:33:49.444 "message": "Invalid parameters" 00:33:49.444 } 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:49.444 04:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:49.444 Adding namespace failed - expected result. 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:49.444 test case2: host connect to nvmf target in multiple paths 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:49.444 [2024-11-05 04:45:02.523085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:49.444 04:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:49.706 04:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:49.706 04:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:33:49.706 04:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:33:49.706 04:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:33:49.706 04:45:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:33:51.620 04:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:33:51.620 04:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:33:51.620 04:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:33:51.913 04:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:33:51.913 04:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:33:51.913 04:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:33:51.913 04:45:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:51.913 [global] 00:33:51.913 thread=1 00:33:51.913 invalidate=1 
00:33:51.913 rw=write 00:33:51.913 time_based=1 00:33:51.913 runtime=1 00:33:51.913 ioengine=libaio 00:33:51.913 direct=1 00:33:51.913 bs=4096 00:33:51.913 iodepth=1 00:33:51.913 norandommap=0 00:33:51.913 numjobs=1 00:33:51.913 00:33:51.913 verify_dump=1 00:33:51.913 verify_backlog=512 00:33:51.913 verify_state_save=0 00:33:51.913 do_verify=1 00:33:51.913 verify=crc32c-intel 00:33:51.913 [job0] 00:33:51.913 filename=/dev/nvme0n1 00:33:51.913 Could not set queue depth (nvme0n1) 00:33:52.176 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:52.176 fio-3.35 00:33:52.176 Starting 1 thread 00:33:53.560 00:33:53.560 job0: (groupid=0, jobs=1): err= 0: pid=3255414: Tue Nov 5 04:45:06 2024 00:33:53.560 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:33:53.560 slat (nsec): min=7556, max=60383, avg=25487.81, stdev=2967.74 00:33:53.560 clat (usec): min=748, max=1182, avg=976.27, stdev=66.26 00:33:53.560 lat (usec): min=773, max=1207, avg=1001.76, stdev=66.63 00:33:53.560 clat percentiles (usec): 00:33:53.560 | 1.00th=[ 799], 5.00th=[ 848], 10.00th=[ 881], 20.00th=[ 938], 00:33:53.561 | 30.00th=[ 963], 40.00th=[ 971], 50.00th=[ 988], 60.00th=[ 996], 00:33:53.561 | 70.00th=[ 1004], 80.00th=[ 1020], 90.00th=[ 1057], 95.00th=[ 1074], 00:33:53.561 | 99.00th=[ 1106], 99.50th=[ 1172], 99.90th=[ 1188], 99.95th=[ 1188], 00:33:53.561 | 99.99th=[ 1188] 00:33:53.561 write: IOPS=755, BW=3021KiB/s (3093kB/s)(3024KiB/1001msec); 0 zone resets 00:33:53.561 slat (nsec): min=9442, max=67851, avg=28280.26, stdev=9927.14 00:33:53.561 clat (usec): min=234, max=821, avg=603.88, stdev=98.89 00:33:53.561 lat (usec): min=245, max=853, avg=632.17, stdev=104.01 00:33:53.561 clat percentiles (usec): 00:33:53.561 | 1.00th=[ 367], 5.00th=[ 404], 10.00th=[ 457], 20.00th=[ 515], 00:33:53.561 | 30.00th=[ 570], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 644], 00:33:53.561 | 70.00th=[ 668], 80.00th=[ 693], 90.00th=[ 717], 95.00th=[ 742], 00:33:53.561 | 99.00th=[ 775], 99.50th=[ 775], 99.90th=[ 824], 99.95th=[ 824], 00:33:53.561 | 99.99th=[ 824] 00:33:53.561 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:33:53.561 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:53.561 lat (usec) : 250=0.08%, 500=10.41%, 750=47.08%, 1000=27.68% 00:33:53.561 lat (msec) : 2=14.75% 00:33:53.561 cpu : usr=1.80%, sys=3.60%, ctx=1268, majf=0, minf=1 00:33:53.561 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:53.561 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.561 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.561 issued rwts: total=512,756,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.561 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:53.561 00:33:53.561 Run status group 0 (all jobs): 00:33:53.561 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:33:53.561 WRITE: bw=3021KiB/s (3093kB/s), 3021KiB/s-3021KiB/s (3093kB/s-3093kB/s), io=3024KiB (3097kB), run=1001-1001msec 00:33:53.561 00:33:53.561 Disk stats (read/write): 00:33:53.561 nvme0n1: ios=562/593, merge=0/0, ticks=553/348, in_queue=901, util=93.39% 00:33:53.561 04:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:53.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:53.561 04:45:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:53.561 04:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:33:53.561 04:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:33:53.561 04:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:53.561 04:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:33:53.561 04:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:53.561 04:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:33:53.561 04:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:53.561 04:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:53.561 04:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:53.561 04:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:53.561 04:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:53.561 04:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:53.561 04:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:53.561 04:45:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:53.561 rmmod nvme_tcp 00:33:53.561 rmmod nvme_fabrics 00:33:53.561 rmmod nvme_keyring 00:33:53.561 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:53.561 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:53.561 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:53.561 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 3254570 ']' 00:33:53.561 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 3254570 00:33:53.561 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 3254570 ']' 00:33:53.561 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 3254570 00:33:53.561 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:33:53.561 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:53.561 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3254570 00:33:53.561 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:53.561 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:53.561 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 3254570' 00:33:53.561 killing process with pid 3254570 00:33:53.561 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 3254570 00:33:53.561 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 3254570 00:33:53.822 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:53.822 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:53.822 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:53.822 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:53.822 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:33:53.822 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:53.822 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:33:53.822 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:53.822 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:53.822 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:53.822 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:53.822 04:45:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:55.735 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:55.735 00:33:55.735 real 0m14.748s 00:33:55.735 user 0m32.643s 00:33:55.735 sys 0m7.183s 00:33:55.735 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:55.735 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:55.735 ************************************ 00:33:55.735 END TEST nvmf_nmic 00:33:55.735 ************************************ 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:55.996 ************************************ 00:33:55.996 START TEST nvmf_fio_target 00:33:55.996 ************************************ 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:55.996 * Looking for test storage... 
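(For reference, the single-job write/verify pass the nmic test ran above was driven by fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v. Reassembled from the parameters echoed in the log, the effective jobfile is approximately the following; the comments note how the wrapper flags appear to map onto fio options.)

    [global]
    thread=1
    invalidate=1
    rw=write            ; -t write
    time_based=1
    runtime=1           ; -r 1
    ioengine=libaio
    direct=1
    bs=4096             ; -i 4096
    iodepth=1           ; -d 1
    norandommap=0
    numjobs=1
    verify_dump=1
    verify_backlog=512
    verify_state_save=0
    do_verify=1
    verify=crc32c-intel ; enabled by -v

    [job0]
    filename=/dev/nvme0n1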
00:33:55.996 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:55.996 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:55.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.997 --rc genhtml_branch_coverage=1 00:33:55.997 --rc genhtml_function_coverage=1 00:33:55.997 --rc genhtml_legend=1 00:33:55.997 --rc geninfo_all_blocks=1 00:33:55.997 --rc geninfo_unexecuted_blocks=1 00:33:55.997 00:33:55.997 ' 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:55.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.997 --rc genhtml_branch_coverage=1 00:33:55.997 --rc genhtml_function_coverage=1 00:33:55.997 --rc genhtml_legend=1 00:33:55.997 --rc geninfo_all_blocks=1 00:33:55.997 --rc geninfo_unexecuted_blocks=1 00:33:55.997 00:33:55.997 ' 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:55.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.997 --rc genhtml_branch_coverage=1 00:33:55.997 --rc genhtml_function_coverage=1 00:33:55.997 --rc genhtml_legend=1 00:33:55.997 --rc geninfo_all_blocks=1 00:33:55.997 --rc geninfo_unexecuted_blocks=1 00:33:55.997 00:33:55.997 ' 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:55.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:55.997 --rc genhtml_branch_coverage=1 00:33:55.997 --rc genhtml_function_coverage=1 00:33:55.997 --rc genhtml_legend=1 00:33:55.997 --rc geninfo_all_blocks=1 00:33:55.997 --rc geninfo_unexecuted_blocks=1 00:33:55.997 
00:33:55.997 ' 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:55.997 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:56.258 04:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:04.411 04:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:04.411 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:04.412 04:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:04.412 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:04.412 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:04.412 Found net 
devices under 0000:4b:00.0: cvl_0_0 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:04.412 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:04.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:04.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.577 ms 00:34:04.412 00:34:04.412 --- 10.0.0.2 ping statistics --- 00:34:04.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.412 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:04.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:04.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:34:04.412 00:34:04.412 --- 10.0.0.1 ping statistics --- 00:34:04.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.412 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:04.412 04:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:04.412 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:04.412 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:04.412 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:04.412 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:04.412 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=3260239 00:34:04.412 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 3260239 00:34:04.412 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:04.412 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 3260239 ']' 00:34:04.412 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:04.412 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:04.412 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:04.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:04.413 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:04.413 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:04.413 [2024-11-05 04:45:17.063135] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:04.413 [2024-11-05 04:45:17.064130] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:34:04.413 [2024-11-05 04:45:17.064174] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:04.413 [2024-11-05 04:45:17.142560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:04.413 [2024-11-05 04:45:17.180480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:04.413 [2024-11-05 04:45:17.180513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:04.413 [2024-11-05 04:45:17.180521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:04.413 [2024-11-05 04:45:17.180528] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:04.413 [2024-11-05 04:45:17.180534] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:04.413 [2024-11-05 04:45:17.182057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.413 [2024-11-05 04:45:17.182173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:04.413 [2024-11-05 04:45:17.182329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.413 [2024-11-05 04:45:17.182330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:04.413 [2024-11-05 04:45:17.237102] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:04.413 [2024-11-05 04:45:17.237728] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:04.413 [2024-11-05 04:45:17.238045] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:04.413 [2024-11-05 04:45:17.238470] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:04.413 [2024-11-05 04:45:17.238667] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
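(A minimal sketch of what nvmfappstart is doing here, assuming the helpers sourced from nvmf/common.sh: the target is launched inside the network namespace created during nvmftestinit with interrupt mode enabled, and the harness blocks on the app's RPC socket before issuing any rpc.py calls. The command and flags are the ones logged above.)

    # start the interrupt-mode target in the test netns (flags as logged: -i 0 -e 0xFFFF --interrupt-mode -m 0xF)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF &
    nvmfpid=$!
    # block until the app answers on /var/tmp/spdk.sock, the default RPC listen address
    waitforlisten "$nvmfpid"
    # ensure shared memory is collected and the target torn down on exit, as the harness does
    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT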
00:34:04.413 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:04.413 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:34:04.413 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:04.413 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:04.413 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:04.413 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:04.413 04:45:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:04.413 [2024-11-05 04:45:18.042878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:04.674 04:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:04.674 04:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:04.674 04:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:04.936 04:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:04.936 04:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:05.198 04:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:05.198 04:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:05.459 04:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:05.459 04:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:05.459 04:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:05.719 04:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:05.719 04:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:05.719 04:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:05.981 04:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:05.981 04:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:05.981 04:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:06.242 04:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:06.242 04:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:06.242 04:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:06.503 04:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:06.503 04:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:06.764 04:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:06.764 [2024-11-05 04:45:20.346981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:06.764 04:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:07.025 04:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:07.285 04:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:07.546 04:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:07.546 04:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:34:07.546 04:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:07.546 04:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:34:07.546 04:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:34:07.546 04:45:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:34:10.089 04:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:10.089 04:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:34:10.089 04:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:10.089 04:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:34:10.089 04:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:10.089 04:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:34:10.089 04:45:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:10.089 [global] 00:34:10.089 thread=1 00:34:10.089 invalidate=1 00:34:10.089 rw=write 00:34:10.089 time_based=1 00:34:10.089 runtime=1 00:34:10.089 ioengine=libaio 00:34:10.089 direct=1 00:34:10.089 bs=4096 00:34:10.089 iodepth=1 00:34:10.089 norandommap=0 00:34:10.089 numjobs=1 00:34:10.089 00:34:10.089 verify_dump=1 00:34:10.089 verify_backlog=512 00:34:10.089 verify_state_save=0 00:34:10.089 do_verify=1 00:34:10.089 verify=crc32c-intel 00:34:10.089 [job0] 00:34:10.089 filename=/dev/nvme0n1 00:34:10.089 [job1] 00:34:10.089 filename=/dev/nvme0n2 00:34:10.089 [job2] 00:34:10.089 filename=/dev/nvme0n3 00:34:10.089 [job3] 00:34:10.089 filename=/dev/nvme0n4 00:34:10.089 Could not set queue depth (nvme0n1) 00:34:10.089 Could not set queue depth (nvme0n2) 00:34:10.089 Could not set queue depth (nvme0n3) 00:34:10.089 Could not set queue depth (nvme0n4) 00:34:10.089 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:10.089 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:10.089 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:10.089 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:10.089 fio-3.35 00:34:10.089 Starting 4 threads 00:34:11.472 00:34:11.472 job0: (groupid=0, jobs=1): err= 0: pid=3261676: Tue Nov 5 04:45:24 2024 00:34:11.472 read: IOPS=252, BW=1009KiB/s (1033kB/s)(1020KiB/1011msec) 00:34:11.472 slat (nsec): min=7471, max=45412, avg=26172.57, stdev=2559.90 00:34:11.472 clat (usec): min=800, max=42089, avg=2612.96, stdev=7880.92 00:34:11.472 lat (usec): min=826, max=42114, avg=2639.13, stdev=7880.90 00:34:11.472 clat percentiles (usec): 00:34:11.472 | 1.00th=[ 865], 5.00th=[ 898], 10.00th=[ 930], 20.00th=[ 971], 00:34:11.472 | 30.00th=[ 996], 40.00th=[ 1012], 50.00th=[ 1029], 60.00th=[ 1045], 00:34:11.472 | 70.00th=[ 1057], 80.00th=[ 1090], 90.00th=[ 1139], 95.00th=[ 1188], 00:34:11.472 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:11.472 | 99.99th=[42206] 00:34:11.472 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:34:11.472 slat (nsec): min=3950, max=74298, avg=28124.71, stdev=11252.51 00:34:11.472 clat (usec): min=156, max=1063, avg=620.46, stdev=131.20 00:34:11.472 lat (usec): min=166, max=1097, avg=648.58, stdev=135.27 00:34:11.472 clat percentiles (usec): 00:34:11.472 | 1.00th=[ 297], 5.00th=[ 396], 10.00th=[ 453], 20.00th=[ 510], 00:34:11.472 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 619], 60.00th=[ 652], 00:34:11.472 | 70.00th=[ 685], 80.00th=[ 725], 90.00th=[ 783], 95.00th=[ 840], 00:34:11.472 | 
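Condensed, the trace above amounts to the following target bring-up (rpc.py abbreviates scripts/rpc.py; the wait loop is a sketch of the waitforserial helper from common/autotest_common.sh, not its verbatim body):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                                    # invoked 7x -> Malloc0..Malloc6
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # plus --hostnqn/--hostid as traced
    # waitforserial SPDKISFASTANDAWESOME 4 -- poll until all four namespaces are visible:
    nvme_device_counter=4 nvme_devices=0 i=0
    while (( i++ <= 15 )); do
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
        (( nvme_devices == nvme_device_counter )) && break
    done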
99.00th=[ 930], 99.50th=[ 955], 99.90th=[ 1057], 99.95th=[ 1057], 00:34:11.472 | 99.99th=[ 1057] 00:34:11.472 bw ( KiB/s): min= 4096, max= 4096, per=44.15%, avg=4096.00, stdev= 0.00, samples=1 00:34:11.472 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:11.472 lat (usec) : 250=0.13%, 500=11.73%, 750=45.11%, 1000=20.60% 00:34:11.472 lat (msec) : 2=21.12%, 50=1.30% 00:34:11.472 cpu : usr=1.98%, sys=2.28%, ctx=767, majf=0, minf=1 00:34:11.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.473 issued rwts: total=255,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:11.473 job1: (groupid=0, jobs=1): err= 0: pid=3261696: Tue Nov 5 04:45:24 2024 00:34:11.473 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:11.473 slat (nsec): min=25250, max=44629, avg=26351.19, stdev=2138.84 00:34:11.473 clat (usec): min=732, max=1299, avg=1065.35, stdev=85.95 00:34:11.473 lat (usec): min=759, max=1325, avg=1091.70, stdev=85.91 00:34:11.473 clat percentiles (usec): 00:34:11.473 | 1.00th=[ 832], 5.00th=[ 906], 10.00th=[ 963], 20.00th=[ 996], 00:34:11.473 | 30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:34:11.473 | 70.00th=[ 1106], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1188], 00:34:11.473 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1303], 99.95th=[ 1303], 00:34:11.473 | 99.99th=[ 1303] 00:34:11.473 write: IOPS=555, BW=2222KiB/s (2275kB/s)(2224KiB/1001msec); 0 zone resets 00:34:11.473 slat (usec): min=10, max=43154, avg=111.02, stdev=1828.76 00:34:11.473 clat (usec): min=278, max=947, avg=666.69, stdev=129.71 00:34:11.473 lat (usec): min=310, max=43872, avg=777.71, stdev=1835.70 00:34:11.473 clat percentiles (usec): 00:34:11.473 | 1.00th=[ 355], 5.00th=[ 449], 10.00th=[ 494], 20.00th=[ 562], 00:34:11.473 | 30.00th=[ 594], 40.00th=[ 635], 50.00th=[ 668], 60.00th=[ 717], 00:34:11.473 | 70.00th=[ 758], 80.00th=[ 791], 90.00th=[ 824], 95.00th=[ 857], 00:34:11.473 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 947], 99.95th=[ 947], 00:34:11.473 | 99.99th=[ 947] 00:34:11.473 bw ( KiB/s): min= 4096, max= 4096, per=44.15%, avg=4096.00, stdev= 0.00, samples=1 00:34:11.473 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:11.473 lat (usec) : 500=6.37%, 750=28.93%, 1000=26.40% 00:34:11.473 lat (msec) : 2=38.30% 00:34:11.473 cpu : usr=2.40%, sys=2.50%, ctx=1071, majf=0, minf=1 00:34:11.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.473 issued rwts: total=512,556,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:11.473 job2: (groupid=0, jobs=1): err= 0: pid=3261715: Tue Nov 5 04:45:24 2024 00:34:11.473 read: IOPS=370, BW=1483KiB/s (1518kB/s)(1484KiB/1001msec) 00:34:11.473 slat (nsec): min=27087, max=59127, avg=28181.79, stdev=3759.67 00:34:11.473 clat (usec): min=710, max=42062, avg=1728.99, stdev=5130.32 00:34:11.473 lat (usec): min=737, max=42089, avg=1757.18, stdev=5130.53 00:34:11.473 clat percentiles (usec): 00:34:11.473 | 1.00th=[ 816], 5.00th=[ 914], 10.00th=[ 955], 20.00th=[ 1004], 00:34:11.473 | 
30.00th=[ 1029], 40.00th=[ 1057], 50.00th=[ 1074], 60.00th=[ 1090], 00:34:11.473 | 70.00th=[ 1123], 80.00th=[ 1156], 90.00th=[ 1205], 95.00th=[ 1237], 00:34:11.473 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:34:11.473 | 99.99th=[42206] 00:34:11.473 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:11.473 slat (nsec): min=9507, max=74622, avg=31780.60, stdev=10209.91 00:34:11.473 clat (usec): min=291, max=1134, avg=635.27, stdev=122.33 00:34:11.473 lat (usec): min=304, max=1173, avg=667.05, stdev=127.34 00:34:11.473 clat percentiles (usec): 00:34:11.473 | 1.00th=[ 355], 5.00th=[ 412], 10.00th=[ 469], 20.00th=[ 529], 00:34:11.473 | 30.00th=[ 586], 40.00th=[ 619], 50.00th=[ 644], 60.00th=[ 668], 00:34:11.473 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 775], 95.00th=[ 807], 00:34:11.473 | 99.00th=[ 930], 99.50th=[ 996], 99.90th=[ 1139], 99.95th=[ 1139], 00:34:11.473 | 99.99th=[ 1139] 00:34:11.473 bw ( KiB/s): min= 4096, max= 4096, per=44.15%, avg=4096.00, stdev= 0.00, samples=1 00:34:11.473 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:11.473 lat (usec) : 500=8.83%, 750=39.98%, 1000=16.65% 00:34:11.473 lat (msec) : 2=33.86%, 50=0.68% 00:34:11.473 cpu : usr=1.70%, sys=3.60%, ctx=885, majf=0, minf=1 00:34:11.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.473 issued rwts: total=371,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:11.473 job3: (groupid=0, jobs=1): err= 0: pid=3261722: Tue Nov 5 04:45:24 2024 00:34:11.473 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:11.473 slat (nsec): min=8986, max=47190, avg=28923.96, stdev=3164.28 00:34:11.473 clat (usec): min=629, max=1300, avg=967.36, stdev=126.99 00:34:11.473 lat (usec): min=658, max=1329, avg=996.28, stdev=127.08 00:34:11.473 clat percentiles (usec): 00:34:11.473 | 1.00th=[ 693], 5.00th=[ 758], 10.00th=[ 791], 20.00th=[ 840], 00:34:11.473 | 30.00th=[ 906], 40.00th=[ 938], 50.00th=[ 979], 60.00th=[ 1012], 00:34:11.473 | 70.00th=[ 1037], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1172], 00:34:11.473 | 99.00th=[ 1254], 99.50th=[ 1303], 99.90th=[ 1303], 99.95th=[ 1303], 00:34:11.473 | 99.99th=[ 1303] 00:34:11.473 write: IOPS=764, BW=3057KiB/s (3130kB/s)(3060KiB/1001msec); 0 zone resets 00:34:11.473 slat (nsec): min=9533, max=73121, avg=29689.93, stdev=11858.64 00:34:11.473 clat (usec): min=161, max=1191, avg=598.01, stdev=142.32 00:34:11.473 lat (usec): min=172, max=1201, avg=627.70, stdev=147.46 00:34:11.473 clat percentiles (usec): 00:34:11.473 | 1.00th=[ 281], 5.00th=[ 363], 10.00th=[ 400], 20.00th=[ 474], 00:34:11.473 | 30.00th=[ 519], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 652], 00:34:11.473 | 70.00th=[ 685], 80.00th=[ 717], 90.00th=[ 758], 95.00th=[ 807], 00:34:11.473 | 99.00th=[ 938], 99.50th=[ 1004], 99.90th=[ 1188], 99.95th=[ 1188], 00:34:11.473 | 99.99th=[ 1188] 00:34:11.473 bw ( KiB/s): min= 4096, max= 4096, per=44.15%, avg=4096.00, stdev= 0.00, samples=1 00:34:11.473 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:11.473 lat (usec) : 250=0.39%, 500=15.27%, 750=38.84%, 1000=27.56% 00:34:11.473 lat (msec) : 2=17.93% 00:34:11.473 cpu : usr=2.30%, sys=5.20%, ctx=1278, majf=0, minf=1 00:34:11.473 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.473 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.473 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.473 issued rwts: total=512,765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.473 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:11.473 00:34:11.473 Run status group 0 (all jobs): 00:34:11.473 READ: bw=6528KiB/s (6685kB/s), 1009KiB/s-2046KiB/s (1033kB/s-2095kB/s), io=6600KiB (6758kB), run=1001-1011msec 00:34:11.473 WRITE: bw=9278KiB/s (9501kB/s), 2026KiB/s-3057KiB/s (2074kB/s-3130kB/s), io=9380KiB (9605kB), run=1001-1011msec 00:34:11.473 00:34:11.473 Disk stats (read/write): 00:34:11.473 nvme0n1: ios=221/512, merge=0/0, ticks=511/258, in_queue=769, util=86.47% 00:34:11.473 nvme0n2: ios=434/512, merge=0/0, ticks=811/317, in_queue=1128, util=96.00% 00:34:11.473 nvme0n3: ios=311/512, merge=0/0, ticks=1366/266, in_queue=1632, util=95.96% 00:34:11.473 nvme0n4: ios=556/512, merge=0/0, ticks=893/265, in_queue=1158, util=100.00% 00:34:11.473 04:45:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:11.473 [global] 00:34:11.473 thread=1 00:34:11.473 invalidate=1 00:34:11.473 rw=randwrite 00:34:11.473 time_based=1 00:34:11.473 runtime=1 00:34:11.473 ioengine=libaio 00:34:11.473 direct=1 00:34:11.473 bs=4096 00:34:11.473 iodepth=1 00:34:11.473 norandommap=0 00:34:11.473 numjobs=1 00:34:11.473 00:34:11.473 verify_dump=1 00:34:11.473 verify_backlog=512 00:34:11.473 verify_state_save=0 00:34:11.473 do_verify=1 00:34:11.473 verify=crc32c-intel 00:34:11.473 [job0] 00:34:11.473 filename=/dev/nvme0n1 00:34:11.473 [job1] 00:34:11.473 filename=/dev/nvme0n2 00:34:11.473 [job2] 00:34:11.473 filename=/dev/nvme0n3 00:34:11.473 [job3] 00:34:11.473 filename=/dev/nvme0n4 00:34:11.473 Could not set queue depth (nvme0n1) 00:34:11.473 Could not set queue depth (nvme0n2) 00:34:11.473 Could not set queue depth (nvme0n3) 00:34:11.473 Could not set queue depth (nvme0n4) 00:34:11.733 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:11.734 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:11.734 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:11.734 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:11.734 fio-3.35 00:34:11.734 Starting 4 threads 00:34:13.248 00:34:13.248 job0: (groupid=0, jobs=1): err= 0: pid=3262149: Tue Nov 5 04:45:26 2024 00:34:13.248 read: IOPS=16, BW=65.4KiB/s (67.0kB/s)(68.0KiB/1039msec) 00:34:13.248 slat (nsec): min=9349, max=31086, avg=26741.06, stdev=4608.65 00:34:13.248 clat (usec): min=40940, max=42084, avg=41770.88, stdev=375.47 00:34:13.248 lat (usec): min=40970, max=42111, avg=41797.62, stdev=374.84 00:34:13.248 clat percentiles (usec): 00:34:13.248 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:13.248 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:13.248 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:13.248 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:13.248 | 99.99th=[42206] 00:34:13.248 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 
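The first "Run status group 0" totals above are internally consistent: fio's aggregate bandwidth is total io divided by the longest job runtime, so 6600 KiB over 1.011 s gives the 6528 KiB/s READ figure (and 9380 KiB / 1.011 s ≈ 9278 KiB/s for WRITE). A one-liner to reproduce the figure:

    awk 'BEGIN { printf "%.0f KiB/s\n", 6600 / 1.011 }'   # -> 6528 KiB/s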
zone resets 00:34:13.248 slat (nsec): min=9041, max=59059, avg=30453.35, stdev=11262.29 00:34:13.248 clat (usec): min=228, max=921, avg=602.24, stdev=128.86 00:34:13.248 lat (usec): min=237, max=964, avg=632.70, stdev=134.61 00:34:13.248 clat percentiles (usec): 00:34:13.249 | 1.00th=[ 302], 5.00th=[ 363], 10.00th=[ 408], 20.00th=[ 498], 00:34:13.249 | 30.00th=[ 537], 40.00th=[ 586], 50.00th=[ 611], 60.00th=[ 644], 00:34:13.249 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 766], 95.00th=[ 816], 00:34:13.249 | 99.00th=[ 873], 99.50th=[ 898], 99.90th=[ 922], 99.95th=[ 922], 00:34:13.249 | 99.99th=[ 922] 00:34:13.249 bw ( KiB/s): min= 4096, max= 4096, per=44.39%, avg=4096.00, stdev= 0.00, samples=1 00:34:13.249 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:13.249 lat (usec) : 250=0.38%, 500=19.28%, 750=64.65%, 1000=12.48% 00:34:13.249 lat (msec) : 50=3.21% 00:34:13.249 cpu : usr=0.96%, sys=1.93%, ctx=531, majf=0, minf=1 00:34:13.249 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.249 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.249 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:13.249 job1: (groupid=0, jobs=1): err= 0: pid=3262166: Tue Nov 5 04:45:26 2024 00:34:13.249 read: IOPS=182, BW=729KiB/s (747kB/s)(732KiB/1004msec) 00:34:13.249 slat (nsec): min=9286, max=62581, avg=25292.07, stdev=4119.68 00:34:13.249 clat (usec): min=803, max=42128, avg=3749.26, stdev=9949.23 00:34:13.249 lat (usec): min=828, max=42154, avg=3774.55, stdev=9948.99 00:34:13.249 clat percentiles (usec): 00:34:13.249 | 1.00th=[ 807], 5.00th=[ 930], 10.00th=[ 955], 20.00th=[ 996], 00:34:13.249 | 30.00th=[ 1029], 40.00th=[ 1074], 50.00th=[ 1139], 60.00th=[ 1188], 00:34:13.249 | 70.00th=[ 1221], 80.00th=[ 1287], 90.00th=[ 1385], 95.00th=[41157], 00:34:13.249 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:13.249 | 99.99th=[42206] 00:34:13.249 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:34:13.249 slat (nsec): min=3335, max=73948, avg=10736.45, stdev=4706.14 00:34:13.249 clat (usec): min=161, max=1387, avg=594.00, stdev=157.10 00:34:13.249 lat (usec): min=165, max=1393, avg=604.74, stdev=158.01 00:34:13.249 clat percentiles (usec): 00:34:13.249 | 1.00th=[ 243], 5.00th=[ 347], 10.00th=[ 383], 20.00th=[ 469], 00:34:13.249 | 30.00th=[ 510], 40.00th=[ 562], 50.00th=[ 603], 60.00th=[ 644], 00:34:13.249 | 70.00th=[ 668], 80.00th=[ 709], 90.00th=[ 775], 95.00th=[ 848], 00:34:13.249 | 99.00th=[ 1004], 99.50th=[ 1045], 99.90th=[ 1385], 99.95th=[ 1385], 00:34:13.249 | 99.99th=[ 1385] 00:34:13.249 bw ( KiB/s): min= 4096, max= 4096, per=44.39%, avg=4096.00, stdev= 0.00, samples=1 00:34:13.249 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:13.249 lat (usec) : 250=0.86%, 500=19.28%, 750=43.74%, 1000=14.96% 00:34:13.249 lat (msec) : 2=19.42%, 50=1.73% 00:34:13.249 cpu : usr=0.40%, sys=1.00%, ctx=696, majf=0, minf=2 00:34:13.249 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.249 issued rwts: total=183,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.249 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:34:13.249 job2: (groupid=0, jobs=1): err= 0: pid=3262183: Tue Nov 5 04:45:26 2024 00:34:13.249 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:34:13.249 slat (nsec): min=25851, max=49403, avg=27571.13, stdev=2754.62 00:34:13.249 clat (usec): min=622, max=1181, avg=946.86, stdev=67.94 00:34:13.249 lat (usec): min=649, max=1207, avg=974.43, stdev=67.79 00:34:13.249 clat percentiles (usec): 00:34:13.249 | 1.00th=[ 750], 5.00th=[ 816], 10.00th=[ 857], 20.00th=[ 906], 00:34:13.249 | 30.00th=[ 930], 40.00th=[ 947], 50.00th=[ 955], 60.00th=[ 963], 00:34:13.249 | 70.00th=[ 971], 80.00th=[ 996], 90.00th=[ 1029], 95.00th=[ 1045], 00:34:13.249 | 99.00th=[ 1090], 99.50th=[ 1139], 99.90th=[ 1188], 99.95th=[ 1188], 00:34:13.249 | 99.99th=[ 1188] 00:34:13.249 write: IOPS=860, BW=3441KiB/s (3523kB/s)(3444KiB/1001msec); 0 zone resets 00:34:13.249 slat (nsec): min=9046, max=79951, avg=30774.92, stdev=10279.53 00:34:13.249 clat (usec): min=194, max=984, avg=538.63, stdev=136.04 00:34:13.249 lat (usec): min=234, max=1023, avg=569.40, stdev=137.57 00:34:13.249 clat percentiles (usec): 00:34:13.249 | 1.00th=[ 243], 5.00th=[ 314], 10.00th=[ 355], 20.00th=[ 412], 00:34:13.249 | 30.00th=[ 465], 40.00th=[ 506], 50.00th=[ 553], 60.00th=[ 586], 00:34:13.249 | 70.00th=[ 611], 80.00th=[ 668], 90.00th=[ 701], 95.00th=[ 742], 00:34:13.249 | 99.00th=[ 832], 99.50th=[ 865], 99.90th=[ 988], 99.95th=[ 988], 00:34:13.249 | 99.99th=[ 988] 00:34:13.249 bw ( KiB/s): min= 4096, max= 4096, per=44.39%, avg=4096.00, stdev= 0.00, samples=1 00:34:13.249 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:13.249 lat (usec) : 250=1.31%, 500=22.72%, 750=36.49%, 1000=33.79% 00:34:13.249 lat (msec) : 2=5.68% 00:34:13.249 cpu : usr=3.50%, sys=4.80%, ctx=1374, majf=0, minf=2 00:34:13.249 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.249 issued rwts: total=512,861,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.249 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:13.249 job3: (groupid=0, jobs=1): err= 0: pid=3262190: Tue Nov 5 04:45:26 2024 00:34:13.249 read: IOPS=16, BW=66.8KiB/s (68.4kB/s)(68.0KiB/1018msec) 00:34:13.249 slat (nsec): min=9954, max=26713, avg=25558.47, stdev=4022.81 00:34:13.249 clat (usec): min=1161, max=42101, avg=39545.90, stdev=9891.97 00:34:13.249 lat (usec): min=1171, max=42127, avg=39571.46, stdev=9895.99 00:34:13.249 clat percentiles (usec): 00:34:13.249 | 1.00th=[ 1156], 5.00th=[ 1156], 10.00th=[41681], 20.00th=[41681], 00:34:13.249 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:34:13.249 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:13.249 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:13.249 | 99.99th=[42206] 00:34:13.249 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:34:13.249 slat (nsec): min=9964, max=72142, avg=29926.16, stdev=10087.20 00:34:13.249 clat (usec): min=314, max=1054, avg=633.62, stdev=122.02 00:34:13.249 lat (usec): min=325, max=1088, avg=663.55, stdev=126.22 00:34:13.249 clat percentiles (usec): 00:34:13.249 | 1.00th=[ 367], 5.00th=[ 400], 10.00th=[ 469], 20.00th=[ 519], 00:34:13.249 | 30.00th=[ 578], 40.00th=[ 611], 50.00th=[ 644], 60.00th=[ 676], 00:34:13.249 | 70.00th=[ 709], 80.00th=[ 734], 90.00th=[ 
783], 95.00th=[ 807], 00:34:13.249 | 99.00th=[ 881], 99.50th=[ 898], 99.90th=[ 1057], 99.95th=[ 1057], 00:34:13.249 | 99.99th=[ 1057] 00:34:13.249 bw ( KiB/s): min= 4096, max= 4096, per=44.39%, avg=4096.00, stdev= 0.00, samples=1 00:34:13.249 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:13.249 lat (usec) : 500=15.88%, 750=65.03%, 1000=15.69% 00:34:13.249 lat (msec) : 2=0.38%, 50=3.02% 00:34:13.249 cpu : usr=0.59%, sys=1.67%, ctx=531, majf=0, minf=1 00:34:13.249 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:13.249 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.249 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:13.249 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:13.249 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:13.249 00:34:13.249 Run status group 0 (all jobs): 00:34:13.249 READ: bw=2807KiB/s (2874kB/s), 65.4KiB/s-2046KiB/s (67.0kB/s-2095kB/s), io=2916KiB (2986kB), run=1001-1039msec 00:34:13.249 WRITE: bw=9228KiB/s (9450kB/s), 1971KiB/s-3441KiB/s (2018kB/s-3523kB/s), io=9588KiB (9818kB), run=1001-1039msec 00:34:13.249 00:34:13.249 Disk stats (read/write): 00:34:13.249 nvme0n1: ios=64/512, merge=0/0, ticks=729/239, in_queue=968, util=96.49% 00:34:13.249 nvme0n2: ios=157/512, merge=0/0, ticks=540/294, in_queue=834, util=86.41% 00:34:13.249 nvme0n3: ios=540/585, merge=0/0, ticks=982/271, in_queue=1253, util=95.88% 00:34:13.249 nvme0n4: ios=37/512, merge=0/0, ticks=1382/310, in_queue=1692, util=96.58% 00:34:13.249 04:45:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:34:13.249 [global] 00:34:13.249 thread=1 00:34:13.249 invalidate=1 00:34:13.249 rw=write 00:34:13.249 time_based=1 00:34:13.249 runtime=1 00:34:13.249 ioengine=libaio 00:34:13.249 direct=1 00:34:13.249 bs=4096 00:34:13.249 iodepth=128 00:34:13.249 norandommap=0 00:34:13.249 numjobs=1 00:34:13.249 00:34:13.249 verify_dump=1 00:34:13.249 verify_backlog=512 00:34:13.249 verify_state_save=0 00:34:13.249 do_verify=1 00:34:13.249 verify=crc32c-intel 00:34:13.249 [job0] 00:34:13.249 filename=/dev/nvme0n1 00:34:13.249 [job1] 00:34:13.249 filename=/dev/nvme0n2 00:34:13.249 [job2] 00:34:13.249 filename=/dev/nvme0n3 00:34:13.249 [job3] 00:34:13.249 filename=/dev/nvme0n4 00:34:13.249 Could not set queue depth (nvme0n1) 00:34:13.249 Could not set queue depth (nvme0n2) 00:34:13.249 Could not set queue depth (nvme0n3) 00:34:13.249 Could not set queue depth (nvme0n4) 00:34:13.513 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:13.513 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:13.513 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:13.513 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:13.513 fio-3.35 00:34:13.513 Starting 4 threads 00:34:14.916 00:34:14.916 job0: (groupid=0, jobs=1): err= 0: pid=3262620: Tue Nov 5 04:45:28 2024 00:34:14.916 read: IOPS=5717, BW=22.3MiB/s (23.4MB/s)(22.5MiB/1007msec) 00:34:14.916 slat (nsec): min=960, max=12370k, avg=78349.94, stdev=568046.81 00:34:14.916 clat (usec): min=1687, max=64668, avg=9955.29, stdev=5318.11 00:34:14.916 lat (usec): min=3962, 
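Each fio-wrapper invocation in this file differs only in its flags; read against the [global] sections it prints, the mapping appears to be as follows (the wrapper itself is spdk/scripts/fio-wrapper and may accept more than is exercised here):

    # scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
    #   -p nvmf  -> run against the kernel /dev/nvme0n1..n4 devices
    #   -i 4096  -> bs=4096
    #   -d 128   -> iodepth=128
    #   -t write -> rw=write (randwrite/read in the other runs)
    #   -r 1     -> runtime=1 with time_based=1
    #   -v       -> do_verify=1, verify=crc32c-intel (absent from the -r 10 read run below)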
max=64674, avg=10033.64, stdev=5376.36 00:34:14.916 clat percentiles (usec): 00:34:14.916 | 1.00th=[ 4228], 5.00th=[ 5473], 10.00th=[ 6259], 20.00th=[ 7242], 00:34:14.916 | 30.00th=[ 7898], 40.00th=[ 8356], 50.00th=[ 8717], 60.00th=[ 9503], 00:34:14.916 | 70.00th=[10290], 80.00th=[11731], 90.00th=[14222], 95.00th=[16188], 00:34:14.916 | 99.00th=[37487], 99.50th=[45351], 99.90th=[60031], 99.95th=[64750], 00:34:14.916 | 99.99th=[64750] 00:34:14.916 write: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec); 0 zone resets 00:34:14.916 slat (nsec): min=1610, max=10325k, avg=84682.68, stdev=575170.30 00:34:14.916 clat (usec): min=1135, max=66552, avg=11455.23, stdev=11145.52 00:34:14.916 lat (usec): min=1144, max=66560, avg=11539.92, stdev=11209.33 00:34:14.916 clat percentiles (usec): 00:34:14.916 | 1.00th=[ 3949], 5.00th=[ 4490], 10.00th=[ 5145], 20.00th=[ 5997], 00:34:14.917 | 30.00th=[ 6718], 40.00th=[ 7242], 50.00th=[ 8160], 60.00th=[ 9241], 00:34:14.917 | 70.00th=[10421], 80.00th=[12649], 90.00th=[15926], 95.00th=[34866], 00:34:14.917 | 99.00th=[64750], 99.50th=[65799], 99.90th=[66323], 99.95th=[66323], 00:34:14.917 | 99.99th=[66323] 00:34:14.917 bw ( KiB/s): min=18368, max=30768, per=27.17%, avg=24568.00, stdev=8768.12, samples=2 00:34:14.917 iops : min= 4592, max= 7692, avg=6142.00, stdev=2192.03, samples=2 00:34:14.917 lat (msec) : 2=0.03%, 4=0.79%, 10=65.59%, 20=28.38%, 50=3.29% 00:34:14.917 lat (msec) : 100=1.92% 00:34:14.917 cpu : usr=4.97%, sys=5.96%, ctx=345, majf=0, minf=1 00:34:14.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:14.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:14.917 issued rwts: total=5758,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:14.917 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:14.917 job1: (groupid=0, jobs=1): err= 0: pid=3262638: Tue Nov 5 04:45:28 2024 00:34:14.917 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:34:14.917 slat (nsec): min=906, max=12185k, avg=117664.04, stdev=812536.93 00:34:14.917 clat (usec): min=2902, max=55648, avg=14762.19, stdev=10444.56 00:34:14.917 lat (usec): min=2911, max=55656, avg=14879.85, stdev=10533.82 00:34:14.917 clat percentiles (usec): 00:34:14.917 | 1.00th=[ 5014], 5.00th=[ 5473], 10.00th=[ 6652], 20.00th=[ 7046], 00:34:14.917 | 30.00th=[ 7767], 40.00th=[ 9372], 50.00th=[10421], 60.00th=[11731], 00:34:14.917 | 70.00th=[14484], 80.00th=[22676], 90.00th=[32113], 95.00th=[38536], 00:34:14.917 | 99.00th=[43779], 99.50th=[47449], 99.90th=[55837], 99.95th=[55837], 00:34:14.917 | 99.99th=[55837] 00:34:14.917 write: IOPS=4042, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1009msec); 0 zone resets 00:34:14.917 slat (usec): min=2, max=16977, avg=128.79, stdev=846.54 00:34:14.917 clat (usec): min=1224, max=72249, avg=18404.57, stdev=17738.26 00:34:14.917 lat (usec): min=1236, max=72276, avg=18533.36, stdev=17867.89 00:34:14.917 clat percentiles (usec): 00:34:14.917 | 1.00th=[ 3163], 5.00th=[ 4621], 10.00th=[ 5145], 20.00th=[ 6587], 00:34:14.917 | 30.00th=[ 7177], 40.00th=[ 9241], 50.00th=[11731], 60.00th=[15270], 00:34:14.917 | 70.00th=[20055], 80.00th=[25035], 90.00th=[51643], 95.00th=[67634], 00:34:14.917 | 99.00th=[70779], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:34:14.917 | 99.99th=[71828] 00:34:14.917 bw ( KiB/s): min= 8192, max=23424, per=17.48%, avg=15808.00, stdev=10770.65, samples=2 00:34:14.917 iops : min= 2048, max= 5856, 
avg=3952.00, stdev=2692.66, samples=2 00:34:14.917 lat (msec) : 2=0.03%, 4=1.21%, 10=46.31%, 20=25.80%, 50=20.87% 00:34:14.917 lat (msec) : 100=5.78% 00:34:14.917 cpu : usr=2.98%, sys=4.46%, ctx=241, majf=0, minf=1 00:34:14.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:14.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:14.917 issued rwts: total=3584,4079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:14.917 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:14.917 job2: (groupid=0, jobs=1): err= 0: pid=3262654: Tue Nov 5 04:45:28 2024 00:34:14.917 read: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec) 00:34:14.917 slat (nsec): min=931, max=42383k, avg=79497.40, stdev=706692.72 00:34:14.917 clat (usec): min=1868, max=57384, avg=10283.35, stdev=5204.40 00:34:14.917 lat (usec): min=1879, max=57394, avg=10362.85, stdev=5248.57 00:34:14.917 clat percentiles (usec): 00:34:14.917 | 1.00th=[ 2704], 5.00th=[ 4686], 10.00th=[ 5800], 20.00th=[ 6652], 00:34:14.917 | 30.00th=[ 7046], 40.00th=[ 7898], 50.00th=[ 9503], 60.00th=[10683], 00:34:14.917 | 70.00th=[12780], 80.00th=[13829], 90.00th=[15270], 95.00th=[16450], 00:34:14.917 | 99.00th=[26870], 99.50th=[54264], 99.90th=[57410], 99.95th=[57410], 00:34:14.917 | 99.99th=[57410] 00:34:14.917 write: IOPS=6155, BW=24.0MiB/s (25.2MB/s)(24.2MiB/1005msec); 0 zone resets 00:34:14.917 slat (nsec): min=1591, max=11705k, avg=75709.24, stdev=550270.30 00:34:14.917 clat (usec): min=1468, max=64230, avg=10321.01, stdev=6991.57 00:34:14.917 lat (usec): min=1476, max=75934, avg=10396.72, stdev=7024.43 00:34:14.917 clat percentiles (usec): 00:34:14.917 | 1.00th=[ 3556], 5.00th=[ 4555], 10.00th=[ 5211], 20.00th=[ 6063], 00:34:14.917 | 30.00th=[ 6718], 40.00th=[ 7504], 50.00th=[ 8586], 60.00th=[10159], 00:34:14.917 | 70.00th=[11600], 80.00th=[13698], 90.00th=[15926], 95.00th=[17695], 00:34:14.917 | 99.00th=[52691], 99.50th=[53216], 99.90th=[64226], 99.95th=[64226], 00:34:14.917 | 99.99th=[64226] 00:34:14.917 bw ( KiB/s): min=22808, max=26344, per=27.18%, avg=24576.00, stdev=2500.33, samples=2 00:34:14.917 iops : min= 5702, max= 6586, avg=6144.00, stdev=625.08, samples=2 00:34:14.917 lat (msec) : 2=0.27%, 4=2.07%, 10=54.75%, 20=40.61%, 50=1.27% 00:34:14.917 lat (msec) : 100=1.03% 00:34:14.917 cpu : usr=3.59%, sys=6.97%, ctx=384, majf=0, minf=1 00:34:14.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:14.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:14.917 issued rwts: total=6144,6186,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:14.917 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:14.917 job3: (groupid=0, jobs=1): err= 0: pid=3262661: Tue Nov 5 04:45:28 2024 00:34:14.917 read: IOPS=6083, BW=23.8MiB/s (24.9MB/s)(24.0MiB/1010msec) 00:34:14.917 slat (nsec): min=927, max=14141k, avg=77895.40, stdev=531001.33 00:34:14.917 clat (usec): min=3453, max=43740, avg=10588.32, stdev=6675.09 00:34:14.917 lat (usec): min=3465, max=43764, avg=10666.21, stdev=6726.65 00:34:14.917 clat percentiles (usec): 00:34:14.917 | 1.00th=[ 3687], 5.00th=[ 5342], 10.00th=[ 6194], 20.00th=[ 6718], 00:34:14.917 | 30.00th=[ 7111], 40.00th=[ 7439], 50.00th=[ 8356], 60.00th=[10290], 00:34:14.917 | 70.00th=[11338], 80.00th=[12125], 90.00th=[13829], 95.00th=[26346], 
00:34:14.917 | 99.00th=[38536], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:34:14.917 | 99.99th=[43779] 00:34:14.917 write: IOPS=6361, BW=24.8MiB/s (26.1MB/s)(25.1MiB/1010msec); 0 zone resets 00:34:14.917 slat (nsec): min=1567, max=10890k, avg=71633.63, stdev=528208.32 00:34:14.917 clat (usec): min=492, max=37395, avg=9813.71, stdev=5341.32 00:34:14.917 lat (usec): min=562, max=37398, avg=9885.35, stdev=5369.61 00:34:14.917 clat percentiles (usec): 00:34:14.917 | 1.00th=[ 3752], 5.00th=[ 4490], 10.00th=[ 5014], 20.00th=[ 5866], 00:34:14.917 | 30.00th=[ 6521], 40.00th=[ 7046], 50.00th=[ 9110], 60.00th=[10159], 00:34:14.917 | 70.00th=[11600], 80.00th=[13042], 90.00th=[13960], 95.00th=[21103], 00:34:14.917 | 99.00th=[33424], 99.50th=[35914], 99.90th=[37487], 99.95th=[37487], 00:34:14.917 | 99.99th=[37487] 00:34:14.917 bw ( KiB/s): min=24576, max=25800, per=27.85%, avg=25188.00, stdev=865.50, samples=2 00:34:14.917 iops : min= 6144, max= 6450, avg=6297.00, stdev=216.37, samples=2 00:34:14.917 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.06% 00:34:14.917 lat (msec) : 2=0.06%, 4=1.07%, 10=56.59%, 20=35.63%, 50=6.55% 00:34:14.917 cpu : usr=4.56%, sys=6.94%, ctx=378, majf=0, minf=1 00:34:14.917 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:14.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:14.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:14.917 issued rwts: total=6144,6425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:14.917 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:14.917 00:34:14.917 Run status group 0 (all jobs): 00:34:14.917 READ: bw=83.7MiB/s (87.7MB/s), 13.9MiB/s-23.9MiB/s (14.5MB/s-25.0MB/s), io=84.5MiB (88.6MB), run=1005-1010msec 00:34:14.917 WRITE: bw=88.3MiB/s (92.6MB/s), 15.8MiB/s-24.8MiB/s (16.6MB/s-26.1MB/s), io=89.2MiB (93.5MB), run=1005-1010msec 00:34:14.917 00:34:14.917 Disk stats (read/write): 00:34:14.917 nvme0n1: ios=5170/5632, merge=0/0, ticks=48894/52296, in_queue=101190, util=87.47% 00:34:14.917 nvme0n2: ios=2566/2565, merge=0/0, ticks=28586/50167, in_queue=78753, util=90.56% 00:34:14.917 nvme0n3: ios=4608/4783, merge=0/0, ticks=27202/30572, in_queue=57774, util=88.32% 00:34:14.917 nvme0n4: ios=5344/5632, merge=0/0, ticks=36039/38313, in_queue=74352, util=88.92% 00:34:14.917 04:45:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:14.917 [global] 00:34:14.917 thread=1 00:34:14.917 invalidate=1 00:34:14.917 rw=randwrite 00:34:14.917 time_based=1 00:34:14.917 runtime=1 00:34:14.917 ioengine=libaio 00:34:14.917 direct=1 00:34:14.917 bs=4096 00:34:14.917 iodepth=128 00:34:14.917 norandommap=0 00:34:14.917 numjobs=1 00:34:14.917 00:34:14.917 verify_dump=1 00:34:14.917 verify_backlog=512 00:34:14.917 verify_state_save=0 00:34:14.917 do_verify=1 00:34:14.917 verify=crc32c-intel 00:34:14.917 [job0] 00:34:14.917 filename=/dev/nvme0n1 00:34:14.917 [job1] 00:34:14.917 filename=/dev/nvme0n2 00:34:14.917 [job2] 00:34:14.917 filename=/dev/nvme0n3 00:34:14.917 [job3] 00:34:14.917 filename=/dev/nvme0n4 00:34:14.917 Could not set queue depth (nvme0n1) 00:34:14.917 Could not set queue depth (nvme0n2) 00:34:14.917 Could not set queue depth (nvme0n3) 00:34:14.917 Could not set queue depth (nvme0n4) 00:34:15.180 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:34:15.180 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:15.180 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:15.180 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:15.180 fio-3.35 00:34:15.180 Starting 4 threads 00:34:16.606 00:34:16.606 job0: (groupid=0, jobs=1): err= 0: pid=3263100: Tue Nov 5 04:45:29 2024 00:34:16.606 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:34:16.606 slat (nsec): min=893, max=12105k, avg=90203.07, stdev=671500.11 00:34:16.606 clat (usec): min=3086, max=46436, avg=11640.93, stdev=6605.49 00:34:16.606 lat (usec): min=3100, max=46466, avg=11731.13, stdev=6664.31 00:34:16.606 clat percentiles (usec): 00:34:16.606 | 1.00th=[ 4490], 5.00th=[ 5735], 10.00th=[ 6652], 20.00th=[ 7570], 00:34:16.606 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 9372], 60.00th=[10814], 00:34:16.606 | 70.00th=[11600], 80.00th=[13566], 90.00th=[21890], 95.00th=[26346], 00:34:16.606 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36963], 99.95th=[42206], 00:34:16.606 | 99.99th=[46400] 00:34:16.606 write: IOPS=5852, BW=22.9MiB/s (24.0MB/s)(23.0MiB/1005msec); 0 zone resets 00:34:16.606 slat (nsec): min=1515, max=20118k, avg=78398.13, stdev=690816.29 00:34:16.606 clat (usec): min=1037, max=61366, avg=10537.76, stdev=6904.86 00:34:16.606 lat (usec): min=1047, max=61395, avg=10616.15, stdev=6964.33 00:34:16.606 clat percentiles (usec): 00:34:16.606 | 1.00th=[ 2704], 5.00th=[ 5014], 10.00th=[ 5407], 20.00th=[ 6849], 00:34:16.606 | 30.00th=[ 7570], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 9634], 00:34:16.606 | 70.00th=[10552], 80.00th=[12387], 90.00th=[17695], 95.00th=[23987], 00:34:16.606 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[50594], 00:34:16.606 | 99.99th=[61604] 00:34:16.606 bw ( KiB/s): min=15328, max=30704, per=28.90%, avg=23016.00, stdev=10872.47, samples=2 00:34:16.606 iops : min= 3832, max= 7676, avg=5754.00, stdev=2718.12, samples=2 00:34:16.606 lat (msec) : 2=0.41%, 4=1.29%, 10=58.05%, 20=31.39%, 50=8.83% 00:34:16.606 lat (msec) : 100=0.03% 00:34:16.606 cpu : usr=4.98%, sys=4.98%, ctx=483, majf=0, minf=1 00:34:16.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:16.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:16.606 issued rwts: total=5632,5882,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:16.606 job1: (groupid=0, jobs=1): err= 0: pid=3263103: Tue Nov 5 04:45:29 2024 00:34:16.606 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:34:16.606 slat (nsec): min=1068, max=16332k, avg=83627.01, stdev=706825.16 00:34:16.606 clat (usec): min=3985, max=39167, avg=11143.17, stdev=6173.13 00:34:16.606 lat (usec): min=3995, max=39172, avg=11226.80, stdev=6221.16 00:34:16.606 clat percentiles (usec): 00:34:16.606 | 1.00th=[ 4293], 5.00th=[ 4948], 10.00th=[ 5538], 20.00th=[ 6259], 00:34:16.606 | 30.00th=[ 6849], 40.00th=[ 7701], 50.00th=[ 8586], 60.00th=[10683], 00:34:16.606 | 70.00th=[13698], 80.00th=[15926], 90.00th=[20579], 95.00th=[23725], 00:34:16.606 | 99.00th=[33817], 99.50th=[35390], 99.90th=[39060], 99.95th=[39060], 00:34:16.606 | 99.99th=[39060] 00:34:16.606 write: IOPS=6406, BW=25.0MiB/s 
(26.2MB/s)(25.2MiB/1007msec); 0 zone resets 00:34:16.606 slat (nsec): min=1669, max=11132k, avg=70014.29, stdev=589311.53 00:34:16.606 clat (usec): min=2133, max=32016, avg=9111.33, stdev=4131.51 00:34:16.606 lat (usec): min=3445, max=32024, avg=9181.35, stdev=4166.78 00:34:16.606 clat percentiles (usec): 00:34:16.606 | 1.00th=[ 3687], 5.00th=[ 4424], 10.00th=[ 4817], 20.00th=[ 5735], 00:34:16.606 | 30.00th=[ 6521], 40.00th=[ 7046], 50.00th=[ 8356], 60.00th=[ 9241], 00:34:16.606 | 70.00th=[10683], 80.00th=[11994], 90.00th=[14353], 95.00th=[17171], 00:34:16.606 | 99.00th=[22938], 99.50th=[22938], 99.90th=[32113], 99.95th=[32113], 00:34:16.606 | 99.99th=[32113] 00:34:16.606 bw ( KiB/s): min=16384, max=34200, per=31.75%, avg=25292.00, stdev=12597.81, samples=2 00:34:16.606 iops : min= 4096, max= 8550, avg=6323.00, stdev=3149.45, samples=2 00:34:16.606 lat (msec) : 4=0.94%, 10=62.25%, 20=30.00%, 50=6.81% 00:34:16.606 cpu : usr=4.47%, sys=7.26%, ctx=264, majf=0, minf=1 00:34:16.606 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:34:16.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:16.606 issued rwts: total=6144,6451,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.606 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:16.606 job2: (groupid=0, jobs=1): err= 0: pid=3263117: Tue Nov 5 04:45:29 2024 00:34:16.606 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:34:16.606 slat (nsec): min=930, max=14300k, avg=121903.94, stdev=882463.22 00:34:16.606 clat (usec): min=1619, max=52381, avg=16175.35, stdev=7788.73 00:34:16.606 lat (usec): min=1634, max=52389, avg=16297.25, stdev=7857.33 00:34:16.606 clat percentiles (usec): 00:34:16.606 | 1.00th=[ 3621], 5.00th=[ 6783], 10.00th=[ 8455], 20.00th=[ 8979], 00:34:16.606 | 30.00th=[11469], 40.00th=[12649], 50.00th=[14484], 60.00th=[17171], 00:34:16.606 | 70.00th=[19530], 80.00th=[22414], 90.00th=[25297], 95.00th=[30540], 00:34:16.606 | 99.00th=[40633], 99.50th=[43779], 99.90th=[52167], 99.95th=[52167], 00:34:16.606 | 99.99th=[52167] 00:34:16.606 write: IOPS=3601, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1006msec); 0 zone resets 00:34:16.606 slat (nsec): min=1517, max=13351k, avg=144409.82, stdev=918115.95 00:34:16.606 clat (usec): min=547, max=115707, avg=19202.66, stdev=21525.21 00:34:16.606 lat (usec): min=556, max=115716, avg=19347.07, stdev=21669.07 00:34:16.606 clat percentiles (usec): 00:34:16.606 | 1.00th=[ 1385], 5.00th=[ 2606], 10.00th=[ 6128], 20.00th=[ 7570], 00:34:16.606 | 30.00th=[ 8586], 40.00th=[ 9896], 50.00th=[ 12911], 60.00th=[ 14484], 00:34:16.606 | 70.00th=[ 15795], 80.00th=[ 19530], 90.00th=[ 53216], 95.00th=[ 64226], 00:34:16.606 | 99.00th=[105382], 99.50th=[111674], 99.90th=[115868], 99.95th=[115868], 00:34:16.607 | 99.99th=[115868] 00:34:16.607 bw ( KiB/s): min=13160, max=15512, per=18.00%, avg=14336.00, stdev=1663.12, samples=2 00:34:16.607 iops : min= 3290, max= 3878, avg=3584.00, stdev=415.78, samples=2 00:34:16.607 lat (usec) : 750=0.04%, 1000=0.06% 00:34:16.607 lat (msec) : 2=2.47%, 4=1.30%, 10=30.83%, 20=41.68%, 50=17.41% 00:34:16.607 lat (msec) : 100=5.41%, 250=0.79% 00:34:16.607 cpu : usr=2.69%, sys=4.18%, ctx=289, majf=0, minf=1 00:34:16.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:34:16.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:34:16.607 issued rwts: total=3584,3623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.607 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:16.607 job3: (groupid=0, jobs=1): err= 0: pid=3263123: Tue Nov 5 04:45:29 2024 00:34:16.607 read: IOPS=3882, BW=15.2MiB/s (15.9MB/s)(15.3MiB/1007msec) 00:34:16.607 slat (nsec): min=964, max=13967k, avg=124499.03, stdev=944535.23 00:34:16.607 clat (usec): min=1522, max=43586, avg=16472.88, stdev=6447.09 00:34:16.607 lat (usec): min=1627, max=46273, avg=16597.38, stdev=6481.21 00:34:16.607 clat percentiles (usec): 00:34:16.607 | 1.00th=[ 2769], 5.00th=[ 7046], 10.00th=[ 8848], 20.00th=[12125], 00:34:16.607 | 30.00th=[13042], 40.00th=[14353], 50.00th=[15664], 60.00th=[16909], 00:34:16.607 | 70.00th=[19006], 80.00th=[21365], 90.00th=[24511], 95.00th=[28443], 00:34:16.607 | 99.00th=[38536], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:34:16.607 | 99.99th=[43779] 00:34:16.607 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:34:16.607 slat (nsec): min=1516, max=16289k, avg=118955.60, stdev=893927.56 00:34:16.607 clat (usec): min=1077, max=49203, avg=15301.72, stdev=9228.34 00:34:16.607 lat (usec): min=1111, max=49213, avg=15420.68, stdev=9293.05 00:34:16.607 clat percentiles (usec): 00:34:16.607 | 1.00th=[ 1303], 5.00th=[ 5145], 10.00th=[ 7308], 20.00th=[ 8979], 00:34:16.607 | 30.00th=[10552], 40.00th=[11863], 50.00th=[12780], 60.00th=[14484], 00:34:16.607 | 70.00th=[15795], 80.00th=[18744], 90.00th=[28967], 95.00th=[36963], 00:34:16.607 | 99.00th=[47973], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:34:16.607 | 99.99th=[49021] 00:34:16.607 bw ( KiB/s): min=16384, max=16384, per=20.57%, avg=16384.00, stdev= 0.00, samples=2 00:34:16.607 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:34:16.607 lat (msec) : 2=0.89%, 4=1.62%, 10=16.41%, 20=58.44%, 50=22.63% 00:34:16.607 cpu : usr=3.08%, sys=4.08%, ctx=244, majf=0, minf=1 00:34:16.607 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:16.607 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:16.607 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:16.607 issued rwts: total=3910,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:16.607 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:16.607 00:34:16.607 Run status group 0 (all jobs): 00:34:16.607 READ: bw=74.8MiB/s (78.4MB/s), 13.9MiB/s-23.8MiB/s (14.6MB/s-25.0MB/s), io=75.3MiB (78.9MB), run=1005-1007msec 00:34:16.607 WRITE: bw=77.8MiB/s (81.6MB/s), 14.1MiB/s-25.0MiB/s (14.8MB/s-26.2MB/s), io=78.3MiB (82.1MB), run=1005-1007msec 00:34:16.607 00:34:16.607 Disk stats (read/write): 00:34:16.607 nvme0n1: ios=4658/5103, merge=0/0, ticks=39602/42196, in_queue=81798, util=91.78% 00:34:16.607 nvme0n2: ios=4859/5120, merge=0/0, ticks=54614/46624, in_queue=101238, util=96.22% 00:34:16.607 nvme0n3: ios=2613/3072, merge=0/0, ticks=40271/54919, in_queue=95190, util=95.78% 00:34:16.607 nvme0n4: ios=3382/3584, merge=0/0, ticks=49665/41789, in_queue=91454, util=89.41% 00:34:16.607 04:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:16.607 04:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3263415 00:34:16.607 04:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:16.607 04:45:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
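The hotplug phase starts here: after a sync, fio.sh backgrounds a 10-second read job and records its pid (3263415 on this run) so that bdevs can be deleted underneath it. A sketch of the flow:

    scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
    fio_pid=$!
    sleep 3        # let the four jobs open /dev/nvme0n1..n4 before the deletes start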
target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:16.607 [global] 00:34:16.607 thread=1 00:34:16.607 invalidate=1 00:34:16.607 rw=read 00:34:16.607 time_based=1 00:34:16.607 runtime=10 00:34:16.607 ioengine=libaio 00:34:16.607 direct=1 00:34:16.607 bs=4096 00:34:16.607 iodepth=1 00:34:16.607 norandommap=1 00:34:16.607 numjobs=1 00:34:16.607 00:34:16.607 [job0] 00:34:16.607 filename=/dev/nvme0n1 00:34:16.607 [job1] 00:34:16.607 filename=/dev/nvme0n2 00:34:16.607 [job2] 00:34:16.607 filename=/dev/nvme0n3 00:34:16.607 [job3] 00:34:16.607 filename=/dev/nvme0n4 00:34:16.607 Could not set queue depth (nvme0n1) 00:34:16.607 Could not set queue depth (nvme0n2) 00:34:16.607 Could not set queue depth (nvme0n3) 00:34:16.607 Could not set queue depth (nvme0n4) 00:34:16.872 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:16.872 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:16.872 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:16.872 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:16.872 fio-3.35 00:34:16.872 Starting 4 threads 00:34:19.435 04:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:19.435 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=253952, buflen=4096 00:34:19.435 fio: pid=3263617, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:19.697 04:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:19.697 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10457088, buflen=4096 00:34:19.697 fio: pid=3263612, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:19.697 04:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:19.697 04:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:19.958 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1478656, buflen=4096 00:34:19.958 fio: pid=3263605, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:19.958 04:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:19.958 04:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:20.219 04:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:20.219 04:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:20.219 fio: io_u error on file /dev/nvme0n2: 
Operation not supported: read offset=307200, buflen=4096 00:34:20.219 fio: pid=3263606, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:20.219 00:34:20.219 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3263605: Tue Nov 5 04:45:33 2024 00:34:20.219 read: IOPS=122, BW=488KiB/s (500kB/s)(1444KiB/2956msec) 00:34:20.219 slat (usec): min=8, max=13115, avg=107.62, stdev=913.03 00:34:20.219 clat (usec): min=542, max=42204, avg=7997.84, stdev=15391.99 00:34:20.219 lat (usec): min=570, max=42231, avg=8105.69, stdev=15382.13 00:34:20.219 clat percentiles (usec): 00:34:20.219 | 1.00th=[ 611], 5.00th=[ 865], 10.00th=[ 922], 20.00th=[ 963], 00:34:20.219 | 30.00th=[ 979], 40.00th=[ 996], 50.00th=[ 1020], 60.00th=[ 1045], 00:34:20.219 | 70.00th=[ 1090], 80.00th=[ 1139], 90.00th=[41681], 95.00th=[42206], 00:34:20.219 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:20.219 | 99.99th=[42206] 00:34:20.219 bw ( KiB/s): min= 96, max= 96, per=2.48%, avg=96.00, stdev= 0.00, samples=5 00:34:20.219 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:34:20.219 lat (usec) : 750=2.21%, 1000=38.95% 00:34:20.219 lat (msec) : 2=41.44%, 50=17.13% 00:34:20.219 cpu : usr=0.14%, sys=0.54%, ctx=366, majf=0, minf=1 00:34:20.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:20.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.219 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.219 issued rwts: total=362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:20.219 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3263606: Tue Nov 5 04:45:33 2024 00:34:20.219 read: IOPS=24, BW=95.2KiB/s (97.5kB/s)(300KiB/3152msec) 00:34:20.219 slat (usec): min=24, max=26595, avg=477.00, stdev=3155.28 00:34:20.219 clat (usec): min=871, max=42077, avg=41246.58, stdev=4738.74 00:34:20.219 lat (usec): min=938, max=67942, avg=41729.60, stdev=5711.89 00:34:20.219 clat percentiles (usec): 00:34:20.219 | 1.00th=[ 873], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:34:20.219 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:34:20.219 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:20.219 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:20.219 | 99.99th=[42206] 00:34:20.219 bw ( KiB/s): min= 89, max= 96, per=2.43%, avg=94.83, stdev= 2.86, samples=6 00:34:20.219 iops : min= 22, max= 24, avg=23.67, stdev= 0.82, samples=6 00:34:20.219 lat (usec) : 1000=1.32% 00:34:20.219 lat (msec) : 50=97.37% 00:34:20.219 cpu : usr=0.10%, sys=0.00%, ctx=80, majf=0, minf=2 00:34:20.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:20.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.219 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.219 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:20.219 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3263612: Tue Nov 5 04:45:33 2024 00:34:20.219 read: IOPS=923, BW=3692KiB/s (3781kB/s)(9.97MiB/2766msec) 00:34:20.219 slat (usec): min=7, max=17505, avg=36.76, stdev=371.47 
00:34:20.219 clat (usec): min=505, max=1741, avg=1031.76, stdev=81.17 00:34:20.219 lat (usec): min=533, max=18587, avg=1068.52, stdev=381.85 00:34:20.219 clat percentiles (usec): 00:34:20.219 | 1.00th=[ 807], 5.00th=[ 889], 10.00th=[ 930], 20.00th=[ 979], 00:34:20.219 | 30.00th=[ 1004], 40.00th=[ 1020], 50.00th=[ 1037], 60.00th=[ 1057], 00:34:20.219 | 70.00th=[ 1074], 80.00th=[ 1090], 90.00th=[ 1123], 95.00th=[ 1139], 00:34:20.219 | 99.00th=[ 1205], 99.50th=[ 1237], 99.90th=[ 1270], 99.95th=[ 1287], 00:34:20.219 | 99.99th=[ 1745] 00:34:20.219 bw ( KiB/s): min= 3704, max= 3832, per=96.85%, avg=3750.40, stdev=48.79, samples=5 00:34:20.219 iops : min= 926, max= 958, avg=937.60, stdev=12.20, samples=5 00:34:20.219 lat (usec) : 750=0.39%, 1000=28.39% 00:34:20.219 lat (msec) : 2=71.18% 00:34:20.219 cpu : usr=1.81%, sys=3.62%, ctx=2556, majf=0, minf=2 00:34:20.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:20.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.219 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.219 issued rwts: total=2554,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:20.219 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3263617: Tue Nov 5 04:45:33 2024 00:34:20.219 read: IOPS=24, BW=96.0KiB/s (98.3kB/s)(248KiB/2584msec) 00:34:20.219 slat (nsec): min=25605, max=36378, avg=26341.76, stdev=1379.86 00:34:20.219 clat (usec): min=1201, max=42158, avg=41293.32, stdev=5176.89 00:34:20.219 lat (usec): min=1237, max=42184, avg=41319.66, stdev=5175.60 00:34:20.219 clat percentiles (usec): 00:34:20.219 | 1.00th=[ 1205], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:34:20.219 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:34:20.219 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:20.219 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:20.219 | 99.99th=[42206] 00:34:20.219 bw ( KiB/s): min= 96, max= 96, per=2.48%, avg=96.00, stdev= 0.00, samples=5 00:34:20.219 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:34:20.219 lat (msec) : 2=1.59%, 50=96.83% 00:34:20.219 cpu : usr=0.12%, sys=0.00%, ctx=63, majf=0, minf=2 00:34:20.219 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:20.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.219 complete : 0=1.6%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:20.219 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:20.219 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:20.219 00:34:20.219 Run status group 0 (all jobs): 00:34:20.219 READ: bw=3872KiB/s (3965kB/s), 95.2KiB/s-3692KiB/s (97.5kB/s-3781kB/s), io=11.9MiB (12.5MB), run=2584-3152msec 00:34:20.219 00:34:20.219 Disk stats (read/write): 00:34:20.219 nvme0n1: ios=272/0, merge=0/0, ticks=2779/0, in_queue=2779, util=93.82% 00:34:20.219 nvme0n2: ios=73/0, merge=0/0, ticks=3013/0, in_queue=3013, util=94.64% 00:34:20.219 nvme0n3: ios=2423/0, merge=0/0, ticks=2337/0, in_queue=2337, util=95.99% 00:34:20.219 nvme0n4: ios=56/0, merge=0/0, ticks=2310/0, in_queue=2310, util=96.06% 00:34:20.220 04:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:20.220 04:45:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:20.481 04:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:20.481 04:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:20.742 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:20.742 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:20.742 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:20.742 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:21.003 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:21.003 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3263415 00:34:21.003 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:21.003 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:21.003 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:21.003 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:21.003 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:34:21.003 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:21.003 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:21.264 nvmf hotplug test: fio failed as expected 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:21.264 
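The err=95 (Operation not supported) lines above are the point of the exercise: each bdev deletion invalidates a namespace under a live fio file. The tail of fio.sh then reaps the job and treats a non-zero exit as success, roughly:

    rpc.py bdev_raid_delete concat0
    rpc.py bdev_raid_delete raid0
    for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
        rpc.py bdev_malloc_delete "$malloc_bdev"    # Malloc0..Malloc6
    done
    fio_status=0
    wait $fio_pid || fio_status=4                   # fio exited 4 once its files vanished
    [ $fio_status -eq 0 ] || echo 'nvmf hotplug test: fio failed as expected'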
04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:21.264 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:21.264 rmmod nvme_tcp 00:34:21.264 rmmod nvme_fabrics 00:34:21.264 rmmod nvme_keyring 00:34:21.525 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:21.525 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:34:21.525 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:34:21.525 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 3260239 ']' 00:34:21.525 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 3260239 00:34:21.525 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 3260239 ']' 00:34:21.525 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 3260239 00:34:21.525 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:34:21.525 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:21.525 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3260239 00:34:21.525 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:21.525 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:21.525 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3260239' 00:34:21.525 killing process with pid 3260239 00:34:21.525 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 3260239 00:34:21.525 04:45:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 3260239 00:34:21.525 04:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:21.525 04:45:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:21.525 04:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:21.525 04:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:34:21.525 04:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:34:21.525 04:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:21.525 04:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:21.525 04:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:21.525 04:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:21.525 04:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.525 04:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:21.525 04:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:24.075 00:34:24.075 real 0m27.775s 00:34:24.075 user 2m25.780s 00:34:24.075 sys 0m11.974s 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:24.075 ************************************ 00:34:24.075 END TEST nvmf_fio_target 00:34:24.075 ************************************ 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:24.075 ************************************ 00:34:24.075 START TEST nvmf_bdevio 00:34:24.075 ************************************ 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:24.075 * Looking for test storage... 
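Before going further into nvmf_bdevio: the nvmftestfini teardown that closed nvmf_fio_target above (killprocess, nvme module unload, iptables cleanup, address flush) reduces to roughly the following shape. This is a hedged sketch, not the suite's actual helper; nvmf_teardown and its pid/iface arguments are illustrative stand-ins:

  nvmf_teardown() {
      local pid=$1 iface=$2
      # Refuse to kill anything that no longer looks like an SPDK reactor;
      # the suite makes the same check with `ps --no-headers -o comm=`.
      [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
      kill "$pid"
      # Unload initiator kernel modules; -r also drops dependents.
      modprobe -v -r nvme-tcp || true
      modprobe -v -r nvme-fabrics || true
      # iptr: keep every firewall rule except those tagged SPDK_NVMF.
      iptables-save | grep -v SPDK_NVMF | iptables-restore
      # Drop the leftover test address from the initiator-side interface.
      ip -4 addr flush "$iface"
  }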
00:34:24.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:24.075 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:24.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.076 --rc genhtml_branch_coverage=1 00:34:24.076 --rc genhtml_function_coverage=1 00:34:24.076 --rc genhtml_legend=1 00:34:24.076 --rc geninfo_all_blocks=1 00:34:24.076 --rc geninfo_unexecuted_blocks=1 00:34:24.076 00:34:24.076 ' 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:24.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.076 --rc genhtml_branch_coverage=1 00:34:24.076 --rc genhtml_function_coverage=1 00:34:24.076 --rc genhtml_legend=1 00:34:24.076 --rc geninfo_all_blocks=1 00:34:24.076 --rc geninfo_unexecuted_blocks=1 00:34:24.076 00:34:24.076 ' 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:24.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.076 --rc genhtml_branch_coverage=1 00:34:24.076 --rc genhtml_function_coverage=1 00:34:24.076 --rc genhtml_legend=1 00:34:24.076 --rc geninfo_all_blocks=1 00:34:24.076 --rc geninfo_unexecuted_blocks=1 00:34:24.076 00:34:24.076 ' 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:24.076 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.076 --rc genhtml_branch_coverage=1 00:34:24.076 --rc genhtml_function_coverage=1 00:34:24.076 --rc genhtml_legend=1 00:34:24.076 --rc geninfo_all_blocks=1 00:34:24.076 --rc geninfo_unexecuted_blocks=1 00:34:24.076 00:34:24.076 ' 00:34:24.076 04:45:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:24.076 04:45:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:24.076 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:24.077 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.077 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:24.077 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.077 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:24.077 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:24.077 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:34:24.077 04:45:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.222 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:32.222 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:32.223 04:45:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:32.223 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:32.223 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:32.223 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:32.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:32.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:34:32.223 00:34:32.223 --- 10.0.0.2 ping statistics --- 00:34:32.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.223 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:32.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:32.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:34:32.223 00:34:32.223 --- 10.0.0.1 ping statistics --- 00:34:32.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.223 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:32.223 04:45:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=3268632 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 3268632 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 3268632 ']' 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:32.223 04:45:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:32.223 [2024-11-05 04:45:44.919208] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:32.223 [2024-11-05 04:45:44.920349] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:34:32.223 [2024-11-05 04:45:44.920401] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:32.223 [2024-11-05 04:45:45.021658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:32.223 [2024-11-05 04:45:45.073670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:32.223 [2024-11-05 04:45:45.073724] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:32.223 [2024-11-05 04:45:45.073732] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:32.224 [2024-11-05 04:45:45.073739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:32.224 [2024-11-05 04:45:45.073757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:32.224 [2024-11-05 04:45:45.076210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:32.224 [2024-11-05 04:45:45.076372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:32.224 [2024-11-05 04:45:45.076533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:32.224 [2024-11-05 04:45:45.076534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:32.224 [2024-11-05 04:45:45.154354] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
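The target bring-up above pins the app to core mask 0x78 (binary 1111000, i.e. cores 3 through 6, which matches the four "Reactor started on core" notices) and then blocks in waitforlisten until the RPC socket answers. A minimal sketch of that pattern, with the poll loop simplified and the command line taken from the trace:

  ip netns exec cvl_0_0_ns_spdk \
      ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 &
  nvmfpid=$!
  # waitforlisten, in essence: poll the UNIX-domain RPC socket until the
  # app responds; rpc_get_methods is a cheap, side-effect-free RPC.
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done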
00:34:32.224 [2024-11-05 04:45:45.155682] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:32.224 [2024-11-05 04:45:45.155801] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:32.224 [2024-11-05 04:45:45.156019] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:32.224 [2024-11-05 04:45:45.156078] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:32.224 [2024-11-05 04:45:45.781538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:32.224 Malloc0 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:32.224 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.224 04:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:32.485 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.485 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:32.485 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:32.485 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:32.485 [2024-11-05 04:45:45.873695] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:32.485 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:32.485 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:34:32.485 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:34:32.485 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:34:32.485 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:34:32.485 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:32.485 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:32.485 { 00:34:32.485 "params": { 00:34:32.485 "name": "Nvme$subsystem", 00:34:32.485 "trtype": "$TEST_TRANSPORT", 00:34:32.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:32.485 "adrfam": "ipv4", 00:34:32.485 "trsvcid": "$NVMF_PORT", 00:34:32.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:32.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:32.485 "hdgst": ${hdgst:-false}, 00:34:32.485 "ddgst": ${ddgst:-false} 00:34:32.485 }, 00:34:32.485 "method": "bdev_nvme_attach_controller" 00:34:32.485 } 00:34:32.485 EOF 00:34:32.485 )") 00:34:32.485 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:34:32.485 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:34:32.485 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:34:32.485 04:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:32.485 "params": { 00:34:32.485 "name": "Nvme1", 00:34:32.485 "trtype": "tcp", 00:34:32.485 "traddr": "10.0.0.2", 00:34:32.485 "adrfam": "ipv4", 00:34:32.485 "trsvcid": "4420", 00:34:32.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:32.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:32.485 "hdgst": false, 00:34:32.485 "ddgst": false 00:34:32.485 }, 00:34:32.485 "method": "bdev_nvme_attach_controller" 00:34:32.485 }' 00:34:32.485 [2024-11-05 04:45:45.933299] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
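gen_nvmf_target_json above builds the bdevio configuration by expanding a heredoc per subsystem and handing the result to bdevio over a /dev/fd pipe. A condensed sketch of the same pattern; the real helper additionally assembles these method objects into the full config that jq normalizes before bdevio reads it as --json /dev/fd/62:

  gen_attach_json() {
      # ${hdgst:-false} / ${ddgst:-false} default the digests off, exactly
      # as in the heredoc traced above.
      cat <<EOF
  {
    "params": {
      "name": "Nvme1",
      "trtype": "tcp",
      "traddr": "10.0.0.2",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode1",
      "hostnqn": "nqn.2016-06.io.spdk:host1",
      "hdgst": ${hdgst:-false},
      "ddgst": ${ddgst:-false}
    },
    "method": "bdev_nvme_attach_controller"
  }
EOF
  }
  # <(...) uses the same /dev/fd mechanism the trace shows as /dev/fd/62.
  gen_attach_json | jq .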
00:34:32.485 [2024-11-05 04:45:45.933375] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3268985 ] 00:34:32.485 [2024-11-05 04:45:46.010820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:32.485 [2024-11-05 04:45:46.055638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:32.485 [2024-11-05 04:45:46.055765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:32.485 [2024-11-05 04:45:46.055797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:32.746 I/O targets: 00:34:32.746 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:34:32.746 00:34:32.746 00:34:32.746 CUnit - A unit testing framework for C - Version 2.1-3 00:34:32.746 http://cunit.sourceforge.net/ 00:34:32.746 00:34:32.746 00:34:32.746 Suite: bdevio tests on: Nvme1n1 00:34:32.746 Test: blockdev write read block ...passed 00:34:32.746 Test: blockdev write zeroes read block ...passed 00:34:32.746 Test: blockdev write zeroes read no split ...passed 00:34:32.746 Test: blockdev write zeroes read split ...passed 00:34:32.746 Test: blockdev write zeroes read split partial ...passed 00:34:32.746 Test: blockdev reset ...[2024-11-05 04:45:46.359913] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:34:32.746 [2024-11-05 04:45:46.359982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210e970 (9): Bad file descriptor 00:34:32.746 [2024-11-05 04:45:46.366050] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:34:32.746 passed 00:34:33.006 Test: blockdev write read 8 blocks ...passed 00:34:33.006 Test: blockdev write read size > 128k ...passed 00:34:33.006 Test: blockdev write read invalid size ...passed 00:34:33.006 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:33.006 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:33.006 Test: blockdev write read max offset ...passed 00:34:33.006 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:33.006 Test: blockdev writev readv 8 blocks ...passed 00:34:33.006 Test: blockdev writev readv 30 x 1block ...passed 00:34:33.268 Test: blockdev writev readv block ...passed 00:34:33.268 Test: blockdev writev readv size > 128k ...passed 00:34:33.268 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:33.268 Test: blockdev comparev and writev ...[2024-11-05 04:45:46.676067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.268 [2024-11-05 04:45:46.676093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:33.268 [2024-11-05 04:45:46.676105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.268 [2024-11-05 04:45:46.676111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:33.268 [2024-11-05 04:45:46.676660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.268 [2024-11-05 04:45:46.676672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:33.268 [2024-11-05 04:45:46.676682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.268 [2024-11-05 04:45:46.676687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:33.268 [2024-11-05 04:45:46.677259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.268 [2024-11-05 04:45:46.677267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:33.268 [2024-11-05 04:45:46.677277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.268 [2024-11-05 04:45:46.677282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:33.268 [2024-11-05 04:45:46.677852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.268 [2024-11-05 04:45:46.677860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:33.268 [2024-11-05 04:45:46.677870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:34:33.268 [2024-11-05 04:45:46.677875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:33.268 passed 00:34:33.268 Test: blockdev nvme passthru rw ...passed 00:34:33.268 Test: blockdev nvme passthru vendor specific ...[2024-11-05 04:45:46.762594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:33.268 [2024-11-05 04:45:46.762605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:33.268 [2024-11-05 04:45:46.763012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:33.268 [2024-11-05 04:45:46.763019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:33.268 [2024-11-05 04:45:46.763346] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:33.268 [2024-11-05 04:45:46.763353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:33.268 [2024-11-05 04:45:46.763688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:34:33.268 [2024-11-05 04:45:46.763695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:33.268 passed 00:34:33.268 Test: blockdev nvme admin passthru ...passed 00:34:33.268 Test: blockdev copy ...passed 00:34:33.268 00:34:33.268 Run Summary: Type Total Ran Passed Failed Inactive 00:34:33.268 suites 1 1 n/a 0 0 00:34:33.268 tests 23 23 23 0 0 00:34:33.268 asserts 152 152 152 0 n/a 00:34:33.268 00:34:33.268 Elapsed time = 1.266 seconds 00:34:33.530 04:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:33.530 04:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:33.530 04:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:33.530 04:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:33.530 04:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:34:33.530 04:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:34:33.530 04:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:33.530 04:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:34:33.530 04:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:33.530 04:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:34:33.530 04:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:33.530 04:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:33.530 rmmod nvme_tcp 00:34:33.530 rmmod nvme_fabrics 00:34:33.530 rmmod nvme_keyring 00:34:33.530 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:34:33.530 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:34:33.530 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:34:33.530 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 3268632 ']' 00:34:33.530 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 3268632 00:34:33.530 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 3268632 ']' 00:34:33.530 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 3268632 00:34:33.530 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:34:33.530 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:33.530 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3268632 00:34:33.530 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:34:33.530 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:34:33.530 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3268632' 00:34:33.530 killing process with pid 3268632 00:34:33.530 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 3268632 00:34:33.530 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 3268632 00:34:33.791 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:33.792 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:33.792 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:33.792 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:34:33.792 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:33.792 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:34:33.792 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:34:33.792 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:33.792 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:33.792 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.792 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:33.792 04:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:35.706 04:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:35.706 00:34:35.706 real 0m12.062s 00:34:35.706 user 
0m9.134s 00:34:35.706 sys 0m6.427s 00:34:35.706 04:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:35.706 04:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:34:35.706 ************************************ 00:34:35.706 END TEST nvmf_bdevio 00:34:35.706 ************************************ 00:34:35.966 04:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:34:35.966 00:34:35.966 real 4m56.730s 00:34:35.966 user 10m22.829s 00:34:35.966 sys 2m3.079s 00:34:35.967 04:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:35.967 04:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:35.967 ************************************ 00:34:35.967 END TEST nvmf_target_core_interrupt_mode 00:34:35.967 ************************************ 00:34:35.967 04:45:49 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:35.967 04:45:49 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:34:35.967 04:45:49 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:35.967 04:45:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:35.967 ************************************ 00:34:35.967 START TEST nvmf_interrupt 00:34:35.967 ************************************ 00:34:35.967 04:45:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:34:35.967 * Looking for test storage... 
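The run_test call above is the harness wrapper that produces the "START TEST" / "END TEST" banners and the real/user/sys timing seen throughout this log. A minimal sketch of such a wrapper, assuming the banner layout from the surrounding output; the argument-count guard traced at autotest_common.sh@1103 and the xtrace toggling are omitted for brevity:

# Run one test suite with banners and timing, as the log's banners suggest.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    # bash's time builtin emits the real/user/sys summary that appears
    # just before each END TEST banner
    time "$@"
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

# e.g.: run_test nvmf_interrupt "$rootdir/test/nvmf/target/interrupt.sh" --transport=tcp --interrupt-mode
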
00:34:35.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:35.967 04:45:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:35.967 04:45:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:34:35.967 04:45:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:36.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.227 --rc genhtml_branch_coverage=1 00:34:36.227 --rc genhtml_function_coverage=1 00:34:36.227 --rc genhtml_legend=1 00:34:36.227 --rc geninfo_all_blocks=1 00:34:36.227 --rc geninfo_unexecuted_blocks=1 00:34:36.227 00:34:36.227 ' 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:36.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.227 --rc genhtml_branch_coverage=1 00:34:36.227 --rc genhtml_function_coverage=1 00:34:36.227 --rc genhtml_legend=1 00:34:36.227 --rc geninfo_all_blocks=1 00:34:36.227 --rc geninfo_unexecuted_blocks=1 00:34:36.227 00:34:36.227 ' 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:36.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.227 --rc genhtml_branch_coverage=1 00:34:36.227 --rc genhtml_function_coverage=1 00:34:36.227 --rc genhtml_legend=1 00:34:36.227 --rc geninfo_all_blocks=1 00:34:36.227 --rc geninfo_unexecuted_blocks=1 00:34:36.227 00:34:36.227 ' 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:36.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:36.227 --rc genhtml_branch_coverage=1 00:34:36.227 --rc genhtml_function_coverage=1 00:34:36.227 --rc genhtml_legend=1 00:34:36.227 --rc geninfo_all_blocks=1 00:34:36.227 --rc geninfo_unexecuted_blocks=1 00:34:36.227 00:34:36.227 ' 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:34:36.227 04:45:49 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:44.370 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:44.370 04:45:56 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:44.370 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:44.370 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:44.371 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:44.371 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:44.371 04:45:56 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:44.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:44.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:34:44.371 00:34:44.371 --- 10.0.0.2 ping statistics --- 00:34:44.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.371 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:44.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:44.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:34:44.371 00:34:44.371 --- 10.0.0.1 ping statistics --- 00:34:44.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:44.371 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=3273326 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 3273326 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 3273326 ']' 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:44.371 04:45:56 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:44.371 [2024-11-05 04:45:57.002089] thread.c:2964:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:44.371 [2024-11-05 04:45:57.003059] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:34:44.371 [2024-11-05 04:45:57.003096] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:44.371 [2024-11-05 04:45:57.077878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:44.371 [2024-11-05 04:45:57.112947] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
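The target launch that waitforlisten is blocking on here is visible verbatim in the trace: nvmf_tgt is started inside the cvl_0_0_ns_spdk namespace, in interrupt mode, with a two-core mask. A sketch of the start-and-wait pattern follows; the launch command is taken from the trace ($SPDK_BIN_DIR stands in for the full build path), while the socket-polling loop is an assumption — the log only shows the "Waiting for process..." message:

# Start the target inside the test namespace: interrupt mode, all trace
# groups enabled (-e 0xFFFF), reactors pinned to cores 0 and 1 (-m 0x3).
ip netns exec cvl_0_0_ns_spdk "$SPDK_BIN_DIR/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
nvmfpid=$!

# Wait for the RPC socket before issuing any rpc_cmd calls
# (the polling loop itself is an assumption, not shown in the trace).
rpc_sock=/var/tmp/spdk.sock
echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
until [[ -S $rpc_sock ]] && kill -0 "$nvmfpid"; do
    sleep 0.1
done
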
00:34:44.371 [2024-11-05 04:45:57.112980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:44.371 [2024-11-05 04:45:57.112988] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:44.371 [2024-11-05 04:45:57.112994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:44.371 [2024-11-05 04:45:57.113000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:44.371 [2024-11-05 04:45:57.114120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.371 [2024-11-05 04:45:57.114121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.371 [2024-11-05 04:45:57.168464] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:44.371 [2024-11-05 04:45:57.168901] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:44.371 [2024-11-05 04:45:57.169269] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:34:44.371 5000+0 records in 00:34:44.371 5000+0 records out 00:34:44.371 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0185455 s, 552 MB/s 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:44.371 AIO0 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:44.371 [2024-11-05 04:45:57.310698] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.371 04:45:57 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:44.371 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:44.372 [2024-11-05 04:45:57.339513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3273326 0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3273326 0 idle 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3273326 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3273326 -w 256 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3273326 root 20 0 128.2g 46080 33408 S 0.0 0.0 0:00.21 reactor_0' 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3273326 root 20 0 128.2g 46080 33408 S 0.0 0.0 0:00.21 reactor_0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3273326 1 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3273326 1 idle 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3273326 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3273326 -w 256 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3273332 root 20 0 128.2g 46080 33408 S 0.0 0.0 0:00.00 reactor_1' 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3273332 root 20 0 128.2g 46080 33408 S 0.0 0.0 0:00.00 reactor_1 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3273378 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3273326 0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # 
reactor_is_busy_or_idle 3273326 0 busy 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3273326 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3273326 -w 256 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3273326 root 20 0 128.2g 46080 33408 R 99.9 0.0 0:00.42 reactor_0' 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3273326 root 20 0 128.2g 46080 33408 R 99.9 0.0 0:00.42 reactor_0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3273326 1 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3273326 1 busy 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3273326 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:44.372 04:45:57 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3273326 -w 256 00:34:44.372 04:45:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:44.634 04:45:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3273332 root 20 0 128.2g 46080 33408 R 99.9 0.0 0:00.29 reactor_1' 00:34:44.634 04:45:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3273332 root 20 0 128.2g 46080 33408 R 99.9 0.0 0:00.29 reactor_1 00:34:44.634 04:45:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:44.634 04:45:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:44.634 04:45:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:44.634 04:45:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:44.634 04:45:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:44.634 04:45:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:44.634 04:45:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:44.634 04:45:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:44.634 04:45:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3273378 00:34:54.640 Initializing NVMe Controllers 00:34:54.640 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:54.640 Controller IO queue size 256, less than required. 00:34:54.640 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:54.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:54.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:54.640 Initialization complete. Launching workers. 
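The workload whose results follow was driven by the spdk_nvme_perf invocation traced at target/interrupt.sh@31 above. The command is reproduced here with the standard meanings of those flags annotated (spdk_nvme_perf is part of the SPDK build, at build/bin in the trace):

# -q 256     queue depth per worker (hence the "queue size 256, less
#            than required" notice above)
# -o 4096    4 KiB I/O size
# -w randrw  random mixed read/write workload
# -M 30      read share of the mix, i.e. 30% reads / 70% writes
# -t 10      run time in seconds
# -c 0xC     core mask 0b1100 = lcores 2 and 3, matching the
#            "NSID 1 with lcore 2/3" associations above
# -r ...     transport ID of the NVMe/TCP listener to connect to
spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
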
00:34:54.640 ======================================================== 00:34:54.640 Latency(us) 00:34:54.640 Device Information : IOPS MiB/s Average min max 00:34:54.640 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19503.55 76.19 13131.06 4428.12 29609.15 00:34:54.640 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16510.46 64.49 15511.29 7728.45 18847.77 00:34:54.640 ======================================================== 00:34:54.640 Total : 36014.01 140.68 14222.27 4428.12 29609.15 00:34:54.640 00:34:54.640 04:46:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:54.640 04:46:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3273326 0 00:34:54.640 04:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3273326 0 idle 00:34:54.640 04:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3273326 00:34:54.640 04:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:54.640 04:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:54.640 04:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:54.640 04:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:54.640 04:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:54.640 04:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:54.640 04:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:54.640 04:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:54.640 04:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:54.640 04:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3273326 -w 256 00:34:54.640 04:46:07 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:54.640 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3273326 root 20 0 128.2g 46080 33408 S 0.0 0.0 0:20.21 reactor_0' 00:34:54.640 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3273326 root 20 0 128.2g 46080 33408 S 0.0 0.0 0:20.21 reactor_0 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3273326 1 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3273326 1 idle 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3273326 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3273326 -w 256 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3273332 root 20 0 128.2g 46080 33408 S 0.0 0.0 0:10.00 reactor_1' 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3273332 root 20 0 128.2g 46080 33408 S 0.0 0.0 0:10.00 reactor_1 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:54.641 04:46:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:55.212 04:46:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:55.212 04:46:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:34:55.212 04:46:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:55.212 04:46:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:34:55.212 04:46:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3273326 0 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3273326 0 idle 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3273326 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3273326 -w 256 00:34:57.125 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:57.385 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3273326 root 20 0 128.2g 80640 33408 S 0.0 0.1 0:20.44 reactor_0' 00:34:57.385 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3273326 root 20 0 128.2g 80640 33408 S 0.0 0.1 0:20.44 reactor_0 00:34:57.385 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:57.385 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:57.385 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3273326 1 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3273326 1 idle 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3273326 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
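Every idle/busy assertion in this section runs through the same reactor_is_busy_or_idle helper, whose steps are all visible in the trace: one batch sample from top, the reactor thread's %CPU column extracted with sed/awk, and a compare against the thresholds. A sketch reconstructed from those traced steps; the sleep between retries is an assumption, since every check above succeeds on the first sample:

# Return 0 if reactor_<idx> of <pid> matches <state> ("busy" or "idle").
reactor_is_busy_or_idle() {
    local pid=$1 idx=$2 state=$3
    local busy_threshold=${BUSY_THRESHOLD:-65} idle_threshold=30
    local j top_reactor cpu_rate
    hash top || return 1    # the check requires top(1)
    for ((j = 10; j != 0; j--)); do
        # One batch sample (-bHn 1) of this pid's threads, wide output,
        # then keep the row for the reactor_<idx> thread.
        top_reactor=$(top -bHn 1 -p "$pid" -w 256 | grep "reactor_$idx")
        # %CPU is the 9th column of top's per-thread output.
        cpu_rate=$(sed -e 's/^\s*//g' <<< "$top_reactor" | awk '{print $9}')
        cpu_rate=${cpu_rate%.*}    # truncate: 99.9 -> 99, 0.0 -> 0
        if [[ $state == busy ]]; then
            ((cpu_rate >= busy_threshold)) && return 0
        else
            ((cpu_rate <= idle_threshold)) && return 0
        fi
        sleep 1    # retry interval is an assumption
    done
    return 1
}
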
00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3273326 -w 256 00:34:57.386 04:46:10 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:57.645 04:46:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3273332 root 20 0 128.2g 80640 33408 S 0.0 0.1 0:10.14 reactor_1' 00:34:57.645 04:46:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3273332 root 20 0 128.2g 80640 33408 S 0.0 0.1 0:10.14 reactor_1 00:34:57.645 04:46:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:57.645 04:46:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:57.645 04:46:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:57.645 04:46:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:57.645 04:46:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:57.645 04:46:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:57.645 04:46:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:57.645 04:46:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:57.645 04:46:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:57.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:57.905 rmmod nvme_tcp 00:34:57.905 rmmod nvme_fabrics 00:34:57.905 rmmod nvme_keyring 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
3273326 ']' 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 3273326 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 3273326 ']' 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 3273326 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3273326 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3273326' 00:34:57.905 killing process with pid 3273326 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 3273326 00:34:57.905 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 3273326 00:34:58.165 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:58.165 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:58.165 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:58.165 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:58.165 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:34:58.165 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:58.165 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:34:58.165 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:58.165 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:58.165 04:46:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:58.165 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:58.165 04:46:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:00.077 04:46:13 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:00.077 00:35:00.077 real 0m24.257s 00:35:00.077 user 0m40.001s 00:35:00.077 sys 0m9.281s 00:35:00.077 04:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:00.078 04:46:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:00.078 ************************************ 00:35:00.078 END TEST nvmf_interrupt 00:35:00.078 ************************************ 00:35:00.339 00:35:00.339 real 29m36.524s 00:35:00.339 user 61m27.297s 00:35:00.339 sys 9m54.121s 00:35:00.339 04:46:13 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:00.339 04:46:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:00.339 ************************************ 00:35:00.339 END TEST nvmf_tcp 00:35:00.339 ************************************ 00:35:00.339 04:46:13 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:35:00.339 04:46:13 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:00.339 04:46:13 -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:00.339 04:46:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:00.339 04:46:13 -- common/autotest_common.sh@10 -- # set +x 00:35:00.339 ************************************ 00:35:00.339 START TEST spdkcli_nvmf_tcp 00:35:00.339 ************************************ 00:35:00.339 04:46:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:00.339 * Looking for test storage... 00:35:00.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:00.339 04:46:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:00.339 04:46:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:35:00.339 04:46:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:00.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.601 --rc genhtml_branch_coverage=1 00:35:00.601 --rc genhtml_function_coverage=1 00:35:00.601 --rc genhtml_legend=1 00:35:00.601 --rc geninfo_all_blocks=1 00:35:00.601 --rc geninfo_unexecuted_blocks=1 00:35:00.601 00:35:00.601 ' 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:00.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.601 --rc genhtml_branch_coverage=1 00:35:00.601 --rc genhtml_function_coverage=1 00:35:00.601 --rc genhtml_legend=1 00:35:00.601 --rc geninfo_all_blocks=1 00:35:00.601 --rc geninfo_unexecuted_blocks=1 00:35:00.601 00:35:00.601 ' 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:00.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.601 --rc genhtml_branch_coverage=1 00:35:00.601 --rc genhtml_function_coverage=1 00:35:00.601 --rc genhtml_legend=1 00:35:00.601 --rc geninfo_all_blocks=1 00:35:00.601 --rc geninfo_unexecuted_blocks=1 00:35:00.601 00:35:00.601 ' 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:00.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:00.601 --rc genhtml_branch_coverage=1 00:35:00.601 --rc genhtml_function_coverage=1 00:35:00.601 --rc genhtml_legend=1 00:35:00.601 --rc geninfo_all_blocks=1 00:35:00.601 --rc geninfo_unexecuted_blocks=1 00:35:00.601 00:35:00.601 ' 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:00.601 
04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.601 04:46:14 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:00.602 04:46:14 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:00.602 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3276706 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 3276706 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 3276706 ']' 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:00.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:00.602 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:00.602 [2024-11-05 04:46:14.131775] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
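(At this point run_nvmf_tgt has launched the target with a two-core mask (-m 0x3) and waitforlisten blocks until the new pid is up and listening on the UNIX domain socket /var/tmp/spdk.sock. A rough sketch of such a wait loop, using socket-file existence as the readiness signal; the real helper is waitforlisten in autotest_common.sh and may probe more than this:)

#!/usr/bin/env bash
# Illustrative wait-for-RPC-socket loop (not the actual waitforlisten implementation).
waitfor_rpc_socket() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=${3:-100}
  while (( retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
    [[ -S $sock ]] && return 0               # RPC socket exists -> app is listening
    sleep 0.1
  done
  return 1                                   # timed out waiting for the socket
}
# e.g. waitfor_rpc_socket "$nvmf_tgt_pid" && echo "target ready on /var/tmp/spdk.sock"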
00:35:00.602 [2024-11-05 04:46:14.131850] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3276706 ] 00:35:00.602 [2024-11-05 04:46:14.209510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:00.863 [2024-11-05 04:46:14.253395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:00.863 [2024-11-05 04:46:14.253397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:01.434 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:01.434 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:35:01.434 04:46:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:01.434 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:01.434 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:01.434 04:46:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:01.434 04:46:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:01.434 04:46:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:01.434 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:01.434 04:46:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:01.434 04:46:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:01.434 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:01.434 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:01.434 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:01.434 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:01.434 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:01.434 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:01.434 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:01.434 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:01.434 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:01.434 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:01.434 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:01.434 ' 00:35:04.732 [2024-11-05 04:46:17.676855] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.674 [2024-11-05 04:46:19.037197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:08.216 [2024-11-05 04:46:21.568797] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:35:10.759 [2024-11-05 04:46:23.775305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:12.142 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:12.142 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:12.142 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:12.142 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:12.142 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:12.142 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:12.142 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:12.142 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:12.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:12.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:12.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:12.142 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:12.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:12.142 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:12.142 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:12.142 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:12.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:12.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:12.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:12.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:12.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:12.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:12.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:12.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:12.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:12.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:12.143 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:12.143 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:12.143 04:46:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:12.143 04:46:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:12.143 04:46:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:12.143 04:46:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:12.143 04:46:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:12.143 04:46:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:12.143 04:46:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:12.143 04:46:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:12.403 04:46:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:12.403 04:46:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:12.403 04:46:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:12.403 04:46:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:12.403 04:46:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:12.663 
04:46:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:12.663 04:46:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:12.663 04:46:26 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:12.664 04:46:26 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:12.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:12.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:12.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:12.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:12.664 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:12.664 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:12.664 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:12.664 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:12.664 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:12.664 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:12.664 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:12.664 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:12.664 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:12.664 ' 00:35:18.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:18.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:18.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:18.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:18.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:18.045 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:18.045 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:18.045 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:18.045 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:18.045 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:18.045 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:18.045 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:18.045 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:18.045 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:18.045 
04:46:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3276706 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3276706 ']' 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3276706 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3276706 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3276706' 00:35:18.045 killing process with pid 3276706 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 3276706 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 3276706 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3276706 ']' 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3276706 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 3276706 ']' 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 3276706 00:35:18.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3276706) - No such process 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 3276706 is not found' 00:35:18.045 Process with pid 3276706 is not found 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:18.045 00:35:18.045 real 0m17.489s 00:35:18.045 user 0m38.022s 00:35:18.045 sys 0m0.788s 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:18.045 04:46:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:18.045 ************************************ 00:35:18.045 END TEST spdkcli_nvmf_tcp 00:35:18.045 ************************************ 00:35:18.045 04:46:31 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:18.045 04:46:31 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:35:18.045 04:46:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:18.045 04:46:31 -- common/autotest_common.sh@10 -- # set +x 00:35:18.045 ************************************ 00:35:18.045 START TEST nvmf_identify_passthru 00:35:18.045 ************************************ 00:35:18.045 04:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:18.045 * Looking for test 
storage... 00:35:18.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:18.045 04:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:18.045 04:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:18.045 04:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:35:18.045 04:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:18.045 04:46:31 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:18.046 04:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:18.046 04:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:18.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.046 --rc genhtml_branch_coverage=1 00:35:18.046 --rc genhtml_function_coverage=1 00:35:18.046 --rc genhtml_legend=1 00:35:18.046 --rc geninfo_all_blocks=1 00:35:18.046 --rc geninfo_unexecuted_blocks=1 00:35:18.046 00:35:18.046 ' 00:35:18.046 04:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:18.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.046 --rc genhtml_branch_coverage=1 00:35:18.046 --rc genhtml_function_coverage=1 00:35:18.046 --rc genhtml_legend=1 00:35:18.046 --rc geninfo_all_blocks=1 00:35:18.046 --rc geninfo_unexecuted_blocks=1 00:35:18.046 00:35:18.046 ' 00:35:18.046 04:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:18.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.046 --rc genhtml_branch_coverage=1 00:35:18.046 --rc genhtml_function_coverage=1 00:35:18.046 --rc genhtml_legend=1 00:35:18.046 --rc geninfo_all_blocks=1 00:35:18.046 --rc geninfo_unexecuted_blocks=1 00:35:18.046 00:35:18.046 ' 00:35:18.046 04:46:31 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:18.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:18.046 --rc genhtml_branch_coverage=1 00:35:18.046 --rc genhtml_function_coverage=1 00:35:18.046 --rc genhtml_legend=1 00:35:18.046 --rc geninfo_all_blocks=1 00:35:18.046 --rc geninfo_unexecuted_blocks=1 00:35:18.046 00:35:18.046 ' 00:35:18.046 04:46:31 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:18.046 04:46:31 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.046 04:46:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.046 04:46:31 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.046 04:46:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:18.046 04:46:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:18.046 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:18.046 04:46:31 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:18.046 04:46:31 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:18.046 04:46:31 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.046 04:46:31 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.046 04:46:31 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.046 04:46:31 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:18.046 04:46:31 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:18.046 04:46:31 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:18.046 04:46:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:18.046 04:46:31 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:18.046 04:46:31 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:35:18.047 04:46:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:26.176 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:26.176 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:35:26.176 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:26.176 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:26.176 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:26.176 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:26.176 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:26.176 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:35:26.176 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:26.176 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:35:26.176 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:35:26.176 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:35:26.176 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:35:26.176 04:46:38 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:26.177 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:26.177 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:26.177 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:26.177 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:26.177 04:46:38 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:26.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:26.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:35:26.177 00:35:26.177 --- 10.0.0.2 ping statistics --- 00:35:26.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:26.177 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:26.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:26.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms
00:35:26.177
00:35:26.177 --- 10.0.0.1 ping statistics ---
00:35:26.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:35:26.177 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms
00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0
00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:35:26.177 04:46:38 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:35:26.177 04:46:39 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify
00:35:26.177 04:46:39 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable
00:35:26.177 04:46:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x
00:35:26.177 04:46:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf
00:35:26.177 04:46:39 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=()
00:35:26.177 04:46:39 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs
00:35:26.178 04:46:39 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs))
00:35:26.178 04:46:39 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs
00:35:26.178 04:46:39 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=()
00:35:26.178 04:46:39 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs
00:35:26.178 04:46:39 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:35:26.178 04:46:39 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:35:26.178 04:46:39 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr'
00:35:26.178 04:46:39 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 ))
00:35:26.178 04:46:39 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:65:00.0
00:35:26.178 04:46:39 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:65:00.0
00:35:26.178 04:46:39 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0
00:35:26.178 04:46:39 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']'
00:35:26.178 04:46:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0
00:35:26.178 04:46:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:'
00:35:26.178 04:46:39 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}'
00:35:26.178 04:46:39 nvmf_identify_passthru -- target/identify_passthru.sh@23
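The nvmf_tcp_init sequence traced above builds a point-to-point topology with the target interface inside a network namespace. A condensed sketch using the values from the log (cvl_0_0/cvl_0_1, 10.0.0.1/10.0.0.2; ipts is SPDK's iptables wrapper, shown expanded in the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target side moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                             # host -> namespace sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host sanity check

Both pings answering, as they do here, is what lets common.sh return 0 and load nvme-tcp.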
-- # nvme_serial_number=S64GNE0R605487 00:35:26.178 04:46:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:35:26.178 04:46:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:26.178 04:46:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:26.749 04:46:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:35:26.749 04:46:40 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:26.749 04:46:40 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:26.749 04:46:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:26.749 04:46:40 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:26.749 04:46:40 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:26.749 04:46:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:26.749 04:46:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3283976 00:35:26.749 04:46:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:26.749 04:46:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:26.749 04:46:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3283976 00:35:26.749 04:46:40 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 3283976 ']' 00:35:26.749 04:46:40 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:26.749 04:46:40 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:26.749 04:46:40 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:26.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:26.749 04:46:40 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:26.749 04:46:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:26.749 [2024-11-05 04:46:40.194889] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:35:26.749 [2024-11-05 04:46:40.194950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:26.749 [2024-11-05 04:46:40.274267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:26.749 [2024-11-05 04:46:40.314464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:26.749 [2024-11-05 04:46:40.314501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:26.749 [2024-11-05 04:46:40.314510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:26.749 [2024-11-05 04:46:40.314517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:26.749 [2024-11-05 04:46:40.314523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:26.750 [2024-11-05 04:46:40.316225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:26.750 [2024-11-05 04:46:40.316341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:26.750 [2024-11-05 04:46:40.316497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:26.750 [2024-11-05 04:46:40.316497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:27.692 04:46:40 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:27.692 04:46:40 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:35:27.692 04:46:40 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:27.692 04:46:40 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.692 04:46:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:27.692 INFO: Log level set to 20 00:35:27.692 INFO: Requests: 00:35:27.692 { 00:35:27.692 "jsonrpc": "2.0", 00:35:27.692 "method": "nvmf_set_config", 00:35:27.692 "id": 1, 00:35:27.692 "params": { 00:35:27.692 "admin_cmd_passthru": { 00:35:27.692 "identify_ctrlr": true 00:35:27.692 } 00:35:27.692 } 00:35:27.692 } 00:35:27.692 00:35:27.692 INFO: response: 00:35:27.692 { 00:35:27.692 "jsonrpc": "2.0", 00:35:27.692 "id": 1, 00:35:27.692 "result": true 00:35:27.692 } 00:35:27.692 00:35:27.692 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.692 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:27.692 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.692 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:27.692 INFO: Setting log level to 20 00:35:27.692 INFO: Setting log level to 20 00:35:27.692 INFO: Log level set to 20 00:35:27.692 INFO: Log level set to 20 00:35:27.692 INFO: Requests: 00:35:27.692 { 00:35:27.692 "jsonrpc": "2.0", 00:35:27.692 "method": "framework_start_init", 00:35:27.692 "id": 1 00:35:27.692 } 00:35:27.692 00:35:27.692 INFO: Requests: 00:35:27.692 { 00:35:27.692 "jsonrpc": "2.0", 00:35:27.692 "method": "framework_start_init", 00:35:27.692 "id": 1 00:35:27.692 } 00:35:27.692 00:35:27.692 [2024-11-05 04:46:41.064507] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:27.692 INFO: response: 00:35:27.692 { 00:35:27.692 "jsonrpc": "2.0", 00:35:27.692 "id": 1, 00:35:27.692 "result": true 00:35:27.692 } 00:35:27.692 00:35:27.692 INFO: response: 00:35:27.692 { 00:35:27.692 "jsonrpc": "2.0", 00:35:27.692 "id": 1, 00:35:27.692 "result": true 00:35:27.692 } 00:35:27.692 00:35:27.692 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.692 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:27.692 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.692 04:46:41 nvmf_identify_passthru -- 
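rpc_cmd is a thin wrapper that forwards its arguments to scripts/rpc.py against /var/tmp/spdk.sock, so the passthru bring-up above could be driven by hand with the same three calls (a sketch; flags copied verbatim from the trace, and the raw JSON-RPC bodies are the ones printed in the INFO blocks):

  scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # sets admin_cmd_passthru.identify_ctrlr = true
  scripts/rpc.py framework_start_init                        # leave --wait-for-rpc limbo and init subsystems
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

With identify_ctrlr enabled, the target answers Identify Controller admin commands with data from the underlying NVMe device instead of synthesizing its own, which is exactly what the serial/model comparison at the end of this test relies on.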
common/autotest_common.sh@10 -- # set +x 00:35:27.692 INFO: Setting log level to 40 00:35:27.692 INFO: Setting log level to 40 00:35:27.692 INFO: Setting log level to 40 00:35:27.692 [2024-11-05 04:46:41.077847] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:27.692 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.692 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:27.692 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:27.692 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:27.692 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:35:27.692 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.692 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:27.954 Nvme0n1 00:35:27.954 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.954 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:27.954 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.954 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:27.954 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.954 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:27.954 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.954 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:27.954 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.954 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:27.954 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.954 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:27.954 [2024-11-05 04:46:41.465000] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:27.954 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.954 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:27.954 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:27.954 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:27.954 [ 00:35:27.954 { 00:35:27.954 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:27.954 "subtype": "Discovery", 00:35:27.954 "listen_addresses": [], 00:35:27.954 "allow_any_host": true, 00:35:27.954 "hosts": [] 00:35:27.954 }, 00:35:27.954 { 00:35:27.954 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:27.954 "subtype": "NVMe", 00:35:27.954 "listen_addresses": [ 00:35:27.954 { 00:35:27.954 "trtype": "TCP", 00:35:27.954 "adrfam": "IPv4", 00:35:27.954 "traddr": "10.0.0.2", 00:35:27.954 "trsvcid": "4420" 00:35:27.954 } 00:35:27.954 ], 00:35:27.954 "allow_any_host": true, 00:35:27.954 "hosts": [], 00:35:27.954 "serial_number": 
"SPDK00000000000001", 00:35:27.954 "model_number": "SPDK bdev Controller", 00:35:27.954 "max_namespaces": 1, 00:35:27.954 "min_cntlid": 1, 00:35:27.954 "max_cntlid": 65519, 00:35:27.954 "namespaces": [ 00:35:27.954 { 00:35:27.954 "nsid": 1, 00:35:27.954 "bdev_name": "Nvme0n1", 00:35:27.954 "name": "Nvme0n1", 00:35:27.954 "nguid": "36344730526054870025384500000044", 00:35:27.954 "uuid": "36344730-5260-5487-0025-384500000044" 00:35:27.954 } 00:35:27.954 ] 00:35:27.954 } 00:35:27.954 ] 00:35:27.954 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:27.954 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:27.954 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:27.954 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:28.215 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:35:28.215 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:28.215 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:28.215 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:28.215 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:35:28.215 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:35:28.215 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:35:28.215 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:28.215 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:28.215 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:28.215 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:28.215 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:28.215 04:46:41 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:28.215 04:46:41 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:28.215 04:46:41 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:35:28.215 04:46:41 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:28.215 04:46:41 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:35:28.215 04:46:41 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:28.215 04:46:41 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:28.475 rmmod nvme_tcp 00:35:28.475 rmmod nvme_fabrics 00:35:28.475 rmmod nvme_keyring 00:35:28.475 04:46:41 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:28.475 04:46:41 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:35:28.475 04:46:41 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:35:28.475 04:46:41 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 
3283976 ']' 00:35:28.475 04:46:41 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 3283976 00:35:28.475 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 3283976 ']' 00:35:28.475 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 3283976 00:35:28.475 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:35:28.475 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:28.475 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3283976 00:35:28.475 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:28.475 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:28.475 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3283976' 00:35:28.475 killing process with pid 3283976 00:35:28.475 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 3283976 00:35:28.475 04:46:41 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 3283976 00:35:28.735 04:46:42 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:28.735 04:46:42 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:28.735 04:46:42 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:28.735 04:46:42 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:35:28.735 04:46:42 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:35:28.735 04:46:42 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:28.735 04:46:42 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:35:28.735 04:46:42 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:28.735 04:46:42 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:28.735 04:46:42 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.735 04:46:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:28.735 04:46:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.286 04:46:44 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:31.286 00:35:31.286 real 0m12.908s 00:35:31.286 user 0m9.823s 00:35:31.286 sys 0m6.611s 00:35:31.286 04:46:44 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:31.286 04:46:44 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:31.286 ************************************ 00:35:31.286 END TEST nvmf_identify_passthru 00:35:31.286 ************************************ 00:35:31.286 04:46:44 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:31.286 04:46:44 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:31.286 04:46:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:31.286 04:46:44 -- common/autotest_common.sh@10 -- # set +x 00:35:31.286 ************************************ 00:35:31.286 START TEST nvmf_dif 00:35:31.286 ************************************ 00:35:31.286 04:46:44 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:31.286 * Looking for test storage... 
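Teardown is the mirror image; the nvmftestfini steps expand in the trace to roughly the following (the namespace removal happens inside _remove_spdk_ns, whose body is not echoed, so that line is an assumption):

  modprobe -v -r nvme-tcp          # unload initiator modules; fabrics/keyring go with them
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK-tagged rule
  ip netns delete cvl_0_0_ns_spdk                        # assumption: what _remove_spdk_ns does
  ip -4 addr flush cvl_0_1

Tagging the iptables rule with an SPDK_NVMF comment at setup time is what makes this selective restore possible.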
00:35:31.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:31.286 04:46:44 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:31.286 04:46:44 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:35:31.286 04:46:44 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:31.286 04:46:44 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:31.286 04:46:44 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:35:31.286 04:46:44 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:31.286 04:46:44 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:31.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.286 --rc genhtml_branch_coverage=1 00:35:31.286 --rc genhtml_function_coverage=1 00:35:31.286 --rc genhtml_legend=1 00:35:31.286 --rc geninfo_all_blocks=1 00:35:31.286 --rc geninfo_unexecuted_blocks=1 00:35:31.286 00:35:31.286 ' 00:35:31.286 04:46:44 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:31.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.286 --rc genhtml_branch_coverage=1 00:35:31.286 --rc genhtml_function_coverage=1 00:35:31.286 --rc genhtml_legend=1 00:35:31.286 --rc geninfo_all_blocks=1 00:35:31.286 --rc geninfo_unexecuted_blocks=1 00:35:31.286 00:35:31.286 ' 00:35:31.286 04:46:44 nvmf_dif -- common/autotest_common.sh@1705 -- # 
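The lcov check above walks scripts/common.sh's version comparator: split both versions on '.', '-' or ':', then compare numerically field by field, treating missing fields as 0. A condensed, numeric-only sketch of that logic (the real script also normalizes non-numeric fields via decimal() and handles <= and >=):

  cmp_versions() {
      local -a ver1 ver2
      local op=$2 v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          if ((${ver1[v]:-0} > ${ver2[v]:-0})); then [[ $op == '>' ]]; return; fi
          if ((${ver1[v]:-0} < ${ver2[v]:-0})); then [[ $op == '<' ]]; return; fi
      done
      [[ $op == '==' ]]   # every field equal
  }
  cmp_versions 1.15 '<' 2 && echo "lcov 1.15 predates 2"   # true, as in the trace

Since 1 < 2 already in the first field, 'lt 1.15 2' succeeds and the branch-coverage LCOV_OPTS shown here get exported for the rest of the run.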
export 'LCOV=lcov 00:35:31.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.286 --rc genhtml_branch_coverage=1 00:35:31.286 --rc genhtml_function_coverage=1 00:35:31.286 --rc genhtml_legend=1 00:35:31.286 --rc geninfo_all_blocks=1 00:35:31.286 --rc geninfo_unexecuted_blocks=1 00:35:31.286 00:35:31.286 ' 00:35:31.287 04:46:44 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:31.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:31.287 --rc genhtml_branch_coverage=1 00:35:31.287 --rc genhtml_function_coverage=1 00:35:31.287 --rc genhtml_legend=1 00:35:31.287 --rc geninfo_all_blocks=1 00:35:31.287 --rc geninfo_unexecuted_blocks=1 00:35:31.287 00:35:31.287 ' 00:35:31.287 04:46:44 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:31.287 04:46:44 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:35:31.287 04:46:44 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:31.287 04:46:44 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:31.287 04:46:44 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:31.287 04:46:44 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.287 04:46:44 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.287 04:46:44 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.287 04:46:44 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:35:31.287 04:46:44 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:31.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:31.287 04:46:44 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:31.287 04:46:44 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:31.287 04:46:44 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:31.288 04:46:44 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:31.288 04:46:44 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:31.288 04:46:44 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:31.288 04:46:44 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:31.288 04:46:44 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:31.288 04:46:44 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:31.288 04:46:44 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:31.288 04:46:44 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:31.288 04:46:44 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:31.288 04:46:44 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:31.288 04:46:44 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:31.288 04:46:44 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:31.288 04:46:44 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:31.288 04:46:44 nvmf_dif -- nvmf/common.sh@309 -- # 
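Earlier in this sourcing pass, common.sh also fixes the initiator identity once per run (values echoed above; the suffix extraction is an assumption, since only the resulting value appears in the trace):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # -> nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # assumption: strip through the last ':' to keep the uuid
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

Generating the host NQN once keeps every 'nvme connect' in the run attributable to the same initiator.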
xtrace_disable 00:35:31.288 04:46:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:37.878 04:46:51 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:37.879 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.879 
04:46:51 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:37.879 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:37.879 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:37.879 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:37.879 04:46:51 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:38.139 04:46:51 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:38.139 04:46:51 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:38.139 04:46:51 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:38.139 04:46:51 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:38.139 04:46:51 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:38.139 04:46:51 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:38.139 04:46:51 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:38.139 04:46:51 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:38.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:38.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:35:38.139 00:35:38.139 --- 10.0.0.2 ping statistics --- 00:35:38.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.139 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:35:38.139 04:46:51 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:38.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:38.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:35:38.139 00:35:38.139 --- 10.0.0.1 ping statistics --- 00:35:38.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:38.139 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:35:38.139 04:46:51 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:38.139 04:46:51 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:35:38.139 04:46:51 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:38.139 04:46:51 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:41.437 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:41.437 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:41.437 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:41.437 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:41.437 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:41.437 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:41.437 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:41.437 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:41.437 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:35:41.437 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:35:41.437 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:35:41.437 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:35:41.437 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:35:41.437 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:35:41.437 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:35:41.437 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:35:41.437 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:35:42.008 04:46:55 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:42.008 04:46:55 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:42.008 04:46:55 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:42.008 04:46:55 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:42.008 04:46:55 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:42.008 04:46:55 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:42.008 04:46:55 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:42.008 04:46:55 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:42.008 04:46:55 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:42.008 04:46:55 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:35:42.008 04:46:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:42.008 04:46:55 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=3289924 00:35:42.008 04:46:55 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 3289924 00:35:42.008 04:46:55 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:42.008 04:46:55 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 3289924 ']' 00:35:42.008 04:46:55 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:42.008 04:46:55 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:35:42.008 04:46:55 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:35:42.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:42.008 04:46:55 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:35:42.008 04:46:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:42.008 [2024-11-05 04:46:55.518512] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:35:42.008 [2024-11-05 04:46:55.518570] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:42.008 [2024-11-05 04:46:55.599041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:42.008 [2024-11-05 04:46:55.635832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:42.008 [2024-11-05 04:46:55.635865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:42.008 [2024-11-05 04:46:55.635875] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:42.008 [2024-11-05 04:46:55.635884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:42.008 [2024-11-05 04:46:55.635890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:42.008 [2024-11-05 04:46:55.636459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:42.949 04:46:56 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:35:42.949 04:46:56 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:35:42.949 04:46:56 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:42.949 04:46:56 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:35:42.949 04:46:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:42.949 04:46:56 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:42.949 04:46:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:42.949 04:46:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:42.949 04:46:56 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.949 04:46:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:42.949 [2024-11-05 04:46:56.364843] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:42.949 04:46:56 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.949 04:46:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:42.949 04:46:56 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:42.949 04:46:56 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:42.949 04:46:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:42.949 ************************************ 00:35:42.949 START TEST fio_dif_1_default 00:35:42.949 ************************************ 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- 
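The functional difference from the passthru run above is the transport option dif.sh appends before starting the target, replayed in the trace as:

  NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip   # rpc_cmd forwards to rpc.py

With --dif-insert-or-strip the TCP transport inserts the protection information on writes and strips it on reads, so the host-side fio jobs below work with plain data blocks while the namespace carries per-block DIF metadata.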
target/dif.sh@31 -- # create_subsystem 0 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:42.949 bdev_null0 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:42.949 [2024-11-05 04:46:56.433170] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- 
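create_subsystem 0 boils down to four RPCs, with the geometry coming from the NULL_* defaults set when dif.sh was sourced (64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1):

  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

A null bdev discards writes and fabricates reads, which is enough here: the test exercises the DIF insert/strip path in the transport rather than end-to-end data contents.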
target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:42.949 04:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:42.949 { 00:35:42.949 "params": { 00:35:42.949 "name": "Nvme$subsystem", 00:35:42.949 "trtype": "$TEST_TRANSPORT", 00:35:42.949 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.949 "adrfam": "ipv4", 00:35:42.949 "trsvcid": "$NVMF_PORT", 00:35:42.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.949 "hdgst": ${hdgst:-false}, 00:35:42.949 "ddgst": ${ddgst:-false} 00:35:42.949 }, 00:35:42.949 "method": "bdev_nvme_attach_controller" 00:35:42.949 } 00:35:42.949 EOF 00:35:42.949 )") 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:42.950 "params": { 00:35:42.950 "name": "Nvme0", 00:35:42.950 "trtype": "tcp", 00:35:42.950 "traddr": "10.0.0.2", 00:35:42.950 "adrfam": "ipv4", 00:35:42.950 "trsvcid": "4420", 00:35:42.950 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:42.950 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:42.950 "hdgst": false, 00:35:42.950 "ddgst": false 00:35:42.950 }, 00:35:42.950 "method": "bdev_nvme_attach_controller" 00:35:42.950 }' 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:42.950 04:46:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.518 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:43.518 fio-3.35 00:35:43.518 Starting 1 thread 00:35:55.747 00:35:55.747 filename0: (groupid=0, jobs=1): err= 0: pid=3290508: Tue Nov 5 04:47:07 2024 00:35:55.747 read: IOPS=188, BW=756KiB/s (774kB/s)(7568KiB/10013msec) 00:35:55.747 slat (nsec): min=5384, max=32222, avg=6232.75, stdev=1462.95 00:35:55.747 clat (usec): min=690, max=44722, avg=21151.89, stdev=20183.57 00:35:55.747 lat (usec): min=698, max=44754, avg=21158.13, stdev=20183.56 00:35:55.747 clat percentiles (usec): 00:35:55.747 | 1.00th=[ 857], 5.00th=[ 898], 10.00th=[ 914], 20.00th=[ 930], 00:35:55.747 | 30.00th=[ 938], 40.00th=[ 955], 50.00th=[41157], 60.00th=[41157], 00:35:55.747 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:35:55.747 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:35:55.747 | 99.99th=[44827] 00:35:55.747 bw ( KiB/s): min= 704, max= 768, per=99.89%, avg=755.20, stdev=26.27, samples=20 00:35:55.747 iops : min= 176, max= 192, avg=188.80, stdev= 6.57, samples=20 00:35:55.747 lat (usec) : 750=0.63%, 1000=47.62% 00:35:55.747 lat (msec) : 2=1.64%, 50=50.11% 00:35:55.747 cpu : usr=93.78%, sys=6.01%, ctx=13, majf=0, minf=239 00:35:55.747 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.747 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.747 latency : target=0, window=0, percentile=100.00%, depth=4 
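The fio run above is driven through SPDK's external bdev ioengine: the generated bdev JSON goes in on fd 62 and the job file on fd 61, with the invocation as traced:

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

gen_fio_conf's output is not echoed, so the job file below is only a sketch reconstructed from the filename0 banner (randread, 4 KiB blocks, iodepth 4, one thread):

  [filename0]
  ioengine=spdk_bdev
  thread=1
  rw=randread
  bs=4096
  iodepth=4
  filename=Nvme0n1   ; assumption: the bdev attached via the JSON config

The values above are the ones fio prints back in its banner and the per-job statistics.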
00:35:55.747 00:35:55.747 Run status group 0 (all jobs): 00:35:55.747 READ: bw=756KiB/s (774kB/s), 756KiB/s-756KiB/s (774kB/s-774kB/s), io=7568KiB (7750kB), run=10013-10013msec 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.747 00:35:55.747 real 0m11.240s 00:35:55.747 user 0m24.579s 00:35:55.747 sys 0m0.916s 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:55.747 ************************************ 00:35:55.747 END TEST fio_dif_1_default 00:35:55.747 ************************************ 00:35:55.747 04:47:07 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:55.747 04:47:07 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:55.747 04:47:07 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:55.747 04:47:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:55.747 ************************************ 00:35:55.747 START TEST fio_dif_1_multi_subsystems 00:35:55.747 ************************************ 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:55.747 bdev_null0 00:35:55.747 04:47:07 
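Each fio_dif_* case then tears its resources back down before the next one starts (RPCs as traced; the multi-subsystems variant beginning above repeats the same pattern with bdev_null0 and bdev_null1):

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0

Deleting the subsystem first detaches the namespace so the null bdev can be removed cleanly.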
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.747 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:55.748 [2024-11-05 04:47:07.770015] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:55.748 bdev_null1 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:55.748 { 00:35:55.748 "params": { 00:35:55.748 "name": "Nvme$subsystem", 00:35:55.748 "trtype": "$TEST_TRANSPORT", 00:35:55.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.748 "adrfam": "ipv4", 00:35:55.748 "trsvcid": "$NVMF_PORT", 00:35:55.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.748 "hdgst": ${hdgst:-false}, 00:35:55.748 "ddgst": ${ddgst:-false} 00:35:55.748 }, 00:35:55.748 "method": "bdev_nvme_attach_controller" 00:35:55.748 } 00:35:55.748 EOF 00:35:55.748 )") 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.748 
04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:55.748 { 00:35:55.748 "params": { 00:35:55.748 "name": "Nvme$subsystem", 00:35:55.748 "trtype": "$TEST_TRANSPORT", 00:35:55.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:55.748 "adrfam": "ipv4", 00:35:55.748 "trsvcid": "$NVMF_PORT", 00:35:55.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:55.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:55.748 "hdgst": ${hdgst:-false}, 00:35:55.748 "ddgst": ${ddgst:-false} 00:35:55.748 }, 00:35:55.748 "method": "bdev_nvme_attach_controller" 00:35:55.748 } 00:35:55.748 EOF 00:35:55.748 )") 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:55.748 "params": { 00:35:55.748 "name": "Nvme0", 00:35:55.748 "trtype": "tcp", 00:35:55.748 "traddr": "10.0.0.2", 00:35:55.748 "adrfam": "ipv4", 00:35:55.748 "trsvcid": "4420", 00:35:55.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:55.748 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:55.748 "hdgst": false, 00:35:55.748 "ddgst": false 00:35:55.748 }, 00:35:55.748 "method": "bdev_nvme_attach_controller" 00:35:55.748 },{ 00:35:55.748 "params": { 00:35:55.748 "name": "Nvme1", 00:35:55.748 "trtype": "tcp", 00:35:55.748 "traddr": "10.0.0.2", 00:35:55.748 "adrfam": "ipv4", 00:35:55.748 "trsvcid": "4420", 00:35:55.748 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:55.748 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:55.748 "hdgst": false, 00:35:55.748 "ddgst": false 00:35:55.748 }, 00:35:55.748 "method": "bdev_nvme_attach_controller" 00:35:55.748 }' 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 
-- # asan_lib= 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:55.748 04:47:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:55.748 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:55.748 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:55.748 fio-3.35 00:35:55.748 Starting 2 threads 00:36:05.745 00:36:05.745 filename0: (groupid=0, jobs=1): err= 0: pid=3292883: Tue Nov 5 04:47:18 2024 00:36:05.745 read: IOPS=190, BW=760KiB/s (778kB/s)(7632KiB/10041msec) 00:36:05.745 slat (nsec): min=5385, max=31301, avg=6343.89, stdev=1935.40 00:36:05.745 clat (usec): min=602, max=42360, avg=21033.10, stdev=20226.68 00:36:05.745 lat (usec): min=607, max=42366, avg=21039.44, stdev=20226.52 00:36:05.745 clat percentiles (usec): 00:36:05.745 | 1.00th=[ 619], 5.00th=[ 627], 10.00th=[ 644], 20.00th=[ 799], 00:36:05.745 | 30.00th=[ 824], 40.00th=[ 840], 50.00th=[40633], 60.00th=[41157], 00:36:05.745 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:36:05.745 | 99.00th=[41157], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:05.745 | 99.99th=[42206] 00:36:05.745 bw ( KiB/s): min= 704, max= 768, per=66.42%, avg=761.60, stdev=19.70, samples=20 00:36:05.745 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:36:05.745 lat (usec) : 750=15.62%, 1000=34.07% 00:36:05.745 lat (msec) : 2=0.21%, 50=50.10% 00:36:05.745 cpu : usr=95.89%, sys=3.90%, ctx=12, majf=0, minf=114 00:36:05.745 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:05.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.745 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.745 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:05.745 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:05.745 filename1: (groupid=0, jobs=1): err= 0: pid=3292884: Tue Nov 5 04:47:18 2024 00:36:05.745 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10001msec) 00:36:05.745 slat (nsec): min=5385, max=43359, avg=6647.16, stdev=2725.14 00:36:05.745 clat (usec): min=40773, max=42647, avg=41305.09, stdev=457.44 00:36:05.745 lat (usec): min=40778, max=42653, avg=41311.74, stdev=458.09 00:36:05.745 clat percentiles (usec): 00:36:05.745 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:36:05.745 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:36:05.745 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:36:05.745 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:36:05.745 | 99.99th=[42730] 00:36:05.745 bw ( KiB/s): min= 352, max= 416, per=33.60%, avg=385.68, stdev=12.95, samples=19 00:36:05.745 iops : min= 88, max= 104, avg=96.42, stdev= 3.24, samples=19 00:36:05.745 lat (msec) : 50=100.00% 00:36:05.745 cpu : usr=94.99%, sys=4.79%, ctx=14, majf=0, minf=155 00:36:05.745 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:05.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.745 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:05.745 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:05.745 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:05.745 00:36:05.745 Run status group 0 (all jobs): 00:36:05.745 READ: bw=1146KiB/s (1173kB/s), 387KiB/s-760KiB/s (396kB/s-778kB/s), io=11.2MiB (11.8MB), run=10001-10041msec 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:05.745 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:05.746 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:05.746 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.746 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:05.746 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.746 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:05.746 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.746 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:05.746 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.746 00:36:05.746 real 0m11.442s 00:36:05.746 user 0m34.243s 00:36:05.746 sys 0m1.230s 00:36:05.746 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:05.746 04:47:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:05.746 ************************************ 00:36:05.746 END TEST fio_dif_1_multi_subsystems 00:36:05.746 ************************************ 00:36:05.746 04:47:19 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params 
fio_dif_rand_params 00:36:05.746 04:47:19 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:05.746 04:47:19 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:05.746 04:47:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:05.746 ************************************ 00:36:05.746 START TEST fio_dif_rand_params 00:36:05.746 ************************************ 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.746 bdev_null0 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:05.746 [2024-11-05 04:47:19.299884] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:05.746 04:47:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:05.746 { 00:36:05.746 "params": { 00:36:05.746 "name": "Nvme$subsystem", 00:36:05.746 "trtype": "$TEST_TRANSPORT", 00:36:05.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:05.746 "adrfam": "ipv4", 00:36:05.746 "trsvcid": "$NVMF_PORT", 00:36:05.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:05.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:05.746 "hdgst": ${hdgst:-false}, 00:36:05.746 "ddgst": ${ddgst:-false} 00:36:05.746 }, 00:36:05.746 "method": "bdev_nvme_attach_controller" 00:36:05.746 } 00:36:05.746 EOF 00:36:05.746 )") 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
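The xtrace above (nvmf/common.sh@560-586) shows the shape of gen_nvmf_target_json: one quoted-heredoc JSON fragment per subsystem is appended to the config array, and the fragments are then comma-joined under IFS=, and pretty-printed through jq, producing the single document handed to fio just below. A condensed, runnable approximation of that mechanism (not the verbatim in-tree function, which wraps additional bdev configuration around the same join; the :-defaults mirror the values expanded in this trace):

gen_config() {
    local subsystem config=()

    # One bdev_nvme_attach_controller fragment per subsystem argument.
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done

    # Comma-join the fragments inside a valid wrapper document and let jq
    # validate and pretty-print it (the jq / IFS=, / printf steps traced above).
    jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(
            IFS=,
            printf '%s\n' "${config[*]}"
        )
      ]
    }
  ]
}
JSON
}

gen_config 0    # single subsystem, as in this fio_dif_rand_params pass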
00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:05.746 "params": { 00:36:05.746 "name": "Nvme0", 00:36:05.746 "trtype": "tcp", 00:36:05.746 "traddr": "10.0.0.2", 00:36:05.746 "adrfam": "ipv4", 00:36:05.746 "trsvcid": "4420", 00:36:05.746 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:05.746 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:05.746 "hdgst": false, 00:36:05.746 "ddgst": false 00:36:05.746 }, 00:36:05.746 "method": "bdev_nvme_attach_controller" 00:36:05.746 }' 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:05.746 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:06.031 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:06.031 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:06.031 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:06.031 04:47:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:06.294 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:06.294 ... 
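Before the 3-thread run below starts, create_subsystems 0 issued the usual four-RPC sequence (target/dif.sh@21-24 above), this time creating the null bdev with --dif-type 3; the bs=128k, numjobs=3, iodepth=3, runtime=5 job follows from the NULL_DIF=3 parameter block at the head of the test. The harness's rpc_cmd forwards to SPDK's scripts/rpc.py, so the equivalent target-side setup can be issued by hand against a running nvmf_tgt (sketch; assumes an SPDK checkout and that the tcp transport already exists):

RPC=./scripts/rpc.py

# Only needed when starting from a bare nvmf_tgt:
# $RPC nvmf_create_transport -t tcp

$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420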
00:36:06.294 fio-3.35 00:36:06.294 Starting 3 threads 00:36:12.878 00:36:12.878 filename0: (groupid=0, jobs=1): err= 0: pid=3295080: Tue Nov 5 04:47:25 2024 00:36:12.878 read: IOPS=235, BW=29.5MiB/s (30.9MB/s)(149MiB/5047msec) 00:36:12.878 slat (nsec): min=5637, max=30612, avg=8097.67, stdev=1453.31 00:36:12.878 clat (usec): min=6410, max=55338, avg=12676.99, stdev=5489.93 00:36:12.878 lat (usec): min=6419, max=55347, avg=12685.08, stdev=5490.08 00:36:12.878 clat percentiles (usec): 00:36:12.879 | 1.00th=[ 7308], 5.00th=[ 8160], 10.00th=[ 8979], 20.00th=[10028], 00:36:12.879 | 30.00th=[10683], 40.00th=[11338], 50.00th=[12256], 60.00th=[12911], 00:36:12.879 | 70.00th=[13566], 80.00th=[14222], 90.00th=[15008], 95.00th=[15664], 00:36:12.879 | 99.00th=[50594], 99.50th=[52167], 99.90th=[54789], 99.95th=[55313], 00:36:12.879 | 99.99th=[55313] 00:36:12.879 bw ( KiB/s): min=23808, max=34304, per=33.17%, avg=30387.20, stdev=2961.08, samples=10 00:36:12.879 iops : min= 186, max= 268, avg=237.40, stdev=23.13, samples=10 00:36:12.879 lat (msec) : 10=20.25%, 20=78.07%, 50=0.59%, 100=1.09% 00:36:12.879 cpu : usr=94.79%, sys=4.99%, ctx=7, majf=0, minf=91 00:36:12.879 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.879 issued rwts: total=1190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.879 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:12.879 filename0: (groupid=0, jobs=1): err= 0: pid=3295081: Tue Nov 5 04:47:25 2024 00:36:12.879 read: IOPS=237, BW=29.7MiB/s (31.2MB/s)(150MiB/5047msec) 00:36:12.879 slat (nsec): min=5417, max=33076, avg=6178.83, stdev=1503.03 00:36:12.879 clat (usec): min=6100, max=53581, avg=12561.94, stdev=5920.75 00:36:12.879 lat (usec): min=6108, max=53587, avg=12568.12, stdev=5920.72 00:36:12.879 clat percentiles (usec): 00:36:12.879 | 1.00th=[ 7308], 5.00th=[ 7898], 10.00th=[ 8717], 20.00th=[10028], 00:36:12.879 | 30.00th=[10683], 40.00th=[11338], 50.00th=[11863], 60.00th=[12649], 00:36:12.879 | 70.00th=[13173], 80.00th=[13698], 90.00th=[14484], 95.00th=[15139], 00:36:12.879 | 99.00th=[49546], 99.50th=[51643], 99.90th=[53740], 99.95th=[53740], 00:36:12.879 | 99.99th=[53740] 00:36:12.879 bw ( KiB/s): min=23296, max=32512, per=33.48%, avg=30668.80, stdev=2657.14, samples=10 00:36:12.879 iops : min= 182, max= 254, avg=239.60, stdev=20.76, samples=10 00:36:12.879 lat (msec) : 10=19.98%, 20=77.85%, 50=1.17%, 100=1.00% 00:36:12.879 cpu : usr=94.75%, sys=5.03%, ctx=15, majf=0, minf=85 00:36:12.879 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.879 issued rwts: total=1201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.879 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:12.879 filename0: (groupid=0, jobs=1): err= 0: pid=3295082: Tue Nov 5 04:47:25 2024 00:36:12.879 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(153MiB/5047msec) 00:36:12.879 slat (nsec): min=5401, max=32446, avg=6110.29, stdev=1480.04 00:36:12.879 clat (usec): min=5640, max=92858, avg=12356.14, stdev=8350.13 00:36:12.879 lat (usec): min=5645, max=92864, avg=12362.25, stdev=8350.18 00:36:12.879 clat percentiles (usec): 00:36:12.879 | 1.00th=[ 7046], 5.00th=[ 8029], 10.00th=[ 8848], 20.00th=[ 
9503], 00:36:12.879 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10683], 60.00th=[11076], 00:36:12.879 | 70.00th=[11600], 80.00th=[12387], 90.00th=[13435], 95.00th=[14615], 00:36:12.879 | 99.00th=[51119], 99.50th=[53216], 99.90th=[90702], 99.95th=[92799], 00:36:12.879 | 99.99th=[92799] 00:36:12.879 bw ( KiB/s): min=26112, max=37376, per=34.04%, avg=31180.80, stdev=3750.40, samples=10 00:36:12.879 iops : min= 204, max= 292, avg=243.60, stdev=29.30, samples=10 00:36:12.879 lat (msec) : 10=31.37%, 20=64.70%, 50=1.72%, 100=2.21% 00:36:12.879 cpu : usr=95.18%, sys=4.58%, ctx=9, majf=0, minf=83 00:36:12.879 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:12.879 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.879 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:12.879 issued rwts: total=1221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:12.879 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:12.879 00:36:12.879 Run status group 0 (all jobs): 00:36:12.879 READ: bw=89.5MiB/s (93.8MB/s), 29.5MiB/s-30.2MiB/s (30.9MB/s-31.7MB/s), io=452MiB (473MB), run=5047-5047msec 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd 
bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.879 bdev_null0 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.879 [2024-11-05 04:47:25.610106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.879 bdev_null1 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.879 bdev_null2 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.879 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:12.880 04:47:25 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:12.880 { 00:36:12.880 "params": { 00:36:12.880 "name": "Nvme$subsystem", 00:36:12.880 "trtype": "$TEST_TRANSPORT", 00:36:12.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:12.880 "adrfam": "ipv4", 00:36:12.880 "trsvcid": "$NVMF_PORT", 00:36:12.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:12.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:12.880 "hdgst": ${hdgst:-false}, 00:36:12.880 "ddgst": ${ddgst:-false} 00:36:12.880 }, 00:36:12.880 "method": "bdev_nvme_attach_controller" 00:36:12.880 } 00:36:12.880 EOF 00:36:12.880 )") 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:12.880 { 00:36:12.880 "params": { 00:36:12.880 "name": "Nvme$subsystem", 00:36:12.880 "trtype": "$TEST_TRANSPORT", 00:36:12.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:12.880 "adrfam": "ipv4", 00:36:12.880 "trsvcid": "$NVMF_PORT", 00:36:12.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:12.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:12.880 "hdgst": ${hdgst:-false}, 00:36:12.880 "ddgst": ${ddgst:-false} 00:36:12.880 }, 00:36:12.880 "method": "bdev_nvme_attach_controller" 00:36:12.880 } 00:36:12.880 EOF 00:36:12.880 )") 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:12.880 04:47:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:12.880 { 00:36:12.880 "params": { 00:36:12.880 "name": "Nvme$subsystem", 00:36:12.880 "trtype": "$TEST_TRANSPORT", 00:36:12.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:12.880 "adrfam": "ipv4", 00:36:12.880 "trsvcid": "$NVMF_PORT", 00:36:12.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:12.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:12.880 "hdgst": ${hdgst:-false}, 00:36:12.880 "ddgst": ${ddgst:-false} 00:36:12.880 }, 00:36:12.880 "method": "bdev_nvme_attach_controller" 00:36:12.880 } 00:36:12.880 EOF 00:36:12.880 )") 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:12.880 "params": { 00:36:12.880 "name": "Nvme0", 00:36:12.880 "trtype": "tcp", 00:36:12.880 "traddr": "10.0.0.2", 00:36:12.880 "adrfam": "ipv4", 00:36:12.880 "trsvcid": "4420", 00:36:12.880 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:12.880 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:12.880 "hdgst": false, 00:36:12.880 "ddgst": false 00:36:12.880 }, 00:36:12.880 "method": "bdev_nvme_attach_controller" 00:36:12.880 },{ 00:36:12.880 "params": { 00:36:12.880 "name": "Nvme1", 00:36:12.880 "trtype": "tcp", 00:36:12.880 "traddr": "10.0.0.2", 00:36:12.880 "adrfam": "ipv4", 00:36:12.880 "trsvcid": "4420", 00:36:12.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:12.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:12.880 "hdgst": false, 00:36:12.880 "ddgst": false 00:36:12.880 }, 00:36:12.880 "method": "bdev_nvme_attach_controller" 00:36:12.880 },{ 00:36:12.880 "params": { 00:36:12.880 "name": "Nvme2", 00:36:12.880 "trtype": "tcp", 00:36:12.880 "traddr": "10.0.0.2", 00:36:12.880 "adrfam": "ipv4", 00:36:12.880 "trsvcid": "4420", 00:36:12.880 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:12.880 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:12.880 "hdgst": false, 00:36:12.880 "ddgst": false 00:36:12.880 }, 00:36:12.880 "method": "bdev_nvme_attach_controller" 00:36:12.880 }' 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:12.880 
04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:12.880 04:47:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:12.880 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:12.880 ... 00:36:12.880 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:12.880 ... 00:36:12.880 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:12.880 ... 00:36:12.880 fio-3.35 00:36:12.880 Starting 24 threads 00:36:25.120 00:36:25.120 filename0: (groupid=0, jobs=1): err= 0: pid=3296588: Tue Nov 5 04:47:37 2024 00:36:25.120 read: IOPS=627, BW=2510KiB/s (2570kB/s)(24.6MiB/10052msec) 00:36:25.120 slat (nsec): min=5562, max=97798, avg=7134.62, stdev=3180.61 00:36:25.120 clat (usec): min=4077, max=55284, avg=25375.04, stdev=5311.37 00:36:25.120 lat (usec): min=4097, max=55291, avg=25382.17, stdev=5311.73 00:36:25.120 clat percentiles (usec): 00:36:25.120 | 1.00th=[14746], 5.00th=[17957], 10.00th=[20841], 20.00th=[22152], 00:36:25.120 | 30.00th=[22414], 40.00th=[22938], 50.00th=[23200], 60.00th=[24249], 00:36:25.120 | 70.00th=[28181], 80.00th=[32637], 90.00th=[32900], 95.00th=[33424], 00:36:25.120 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[55313], 00:36:25.120 | 99.99th=[55313] 00:36:25.120 bw ( KiB/s): min= 1916, max= 2864, per=5.37%, avg=2520.00, stdev=332.15, samples=20 00:36:25.120 iops : min= 479, max= 716, avg=629.95, stdev=83.01, samples=20 00:36:25.120 lat (msec) : 10=0.76%, 20=6.21%, 50=92.93%, 100=0.10% 00:36:25.120 cpu : usr=98.82%, sys=0.85%, ctx=56, majf=0, minf=11 00:36:25.120 IO depths : 1=2.1%, 2=4.3%, 4=12.5%, 8=70.4%, 16=10.6%, 32=0.0%, >=64=0.0% 00:36:25.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.120 complete : 0=0.0%, 4=90.5%, 8=4.2%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.120 issued rwts: total=6308,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.120 filename0: (groupid=0, jobs=1): err= 0: pid=3296589: Tue Nov 5 04:47:37 2024 00:36:25.120 read: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10017msec) 00:36:25.120 slat (nsec): min=5235, max=88815, avg=20526.50, stdev=13980.83 00:36:25.120 clat (usec): min=22968, max=45807, avg=33657.66, stdev=1583.51 00:36:25.120 lat (usec): min=22979, max=45814, avg=33678.18, stdev=1580.61 00:36:25.120 clat percentiles (usec): 00:36:25.120 | 1.00th=[29492], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:25.120 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.120 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[35390], 00:36:25.120 | 99.00th=[35914], 99.50th=[43779], 99.90th=[44303], 99.95th=[45351], 00:36:25.120 | 99.99th=[45876] 00:36:25.120 bw ( KiB/s): min= 1788, max= 2048, per=4.03%, avg=1892.42, stdev=68.69, samples=19 00:36:25.120 iops : min= 447, max= 512, avg=473.11, stdev=17.17, samples=19 00:36:25.120 lat (msec) : 50=100.00% 00:36:25.120 cpu : usr=99.03%, sys=0.64%, ctx=74, majf=0, minf=9 00:36:25.120 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 
16=6.4%, 32=0.0%, >=64=0.0% 00:36:25.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.120 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.120 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.120 filename0: (groupid=0, jobs=1): err= 0: pid=3296590: Tue Nov 5 04:47:37 2024 00:36:25.120 read: IOPS=476, BW=1907KiB/s (1952kB/s)(18.6MiB/10003msec) 00:36:25.120 slat (nsec): min=5597, max=72269, avg=15612.79, stdev=11586.42 00:36:25.120 clat (usec): min=9539, max=44339, avg=33440.41, stdev=2594.91 00:36:25.120 lat (usec): min=9552, max=44351, avg=33456.02, stdev=2593.94 00:36:25.120 clat percentiles (usec): 00:36:25.120 | 1.00th=[18220], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:25.120 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.120 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[35390], 00:36:25.120 | 99.00th=[36439], 99.50th=[36963], 99.90th=[36963], 99.95th=[36963], 00:36:25.120 | 99.99th=[44303] 00:36:25.120 bw ( KiB/s): min= 1788, max= 2176, per=4.06%, avg=1905.89, stdev=94.61, samples=19 00:36:25.120 iops : min= 447, max= 544, avg=476.47, stdev=23.65, samples=19 00:36:25.120 lat (msec) : 10=0.34%, 20=1.01%, 50=98.66% 00:36:25.120 cpu : usr=99.01%, sys=0.67%, ctx=64, majf=0, minf=10 00:36:25.120 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:25.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.120 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.120 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.120 filename0: (groupid=0, jobs=1): err= 0: pid=3296591: Tue Nov 5 04:47:37 2024 00:36:25.120 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10022msec) 00:36:25.120 slat (nsec): min=5600, max=77691, avg=13996.38, stdev=11651.87 00:36:25.120 clat (usec): min=17362, max=46670, avg=33669.38, stdev=1959.53 00:36:25.120 lat (usec): min=17367, max=46677, avg=33683.38, stdev=1955.80 00:36:25.120 clat percentiles (usec): 00:36:25.120 | 1.00th=[23725], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:25.120 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.120 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[35390], 00:36:25.120 | 99.00th=[36439], 99.50th=[42730], 99.90th=[46400], 99.95th=[46400], 00:36:25.120 | 99.99th=[46924] 00:36:25.120 bw ( KiB/s): min= 1792, max= 2048, per=4.03%, avg=1892.37, stdev=80.50, samples=19 00:36:25.120 iops : min= 448, max= 512, avg=473.05, stdev=20.11, samples=19 00:36:25.120 lat (msec) : 20=0.13%, 50=99.87% 00:36:25.120 cpu : usr=98.80%, sys=0.90%, ctx=12, majf=0, minf=9 00:36:25.120 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:36:25.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.120 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.120 issued rwts: total=4742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.120 filename0: (groupid=0, jobs=1): err= 0: pid=3296592: Tue Nov 5 04:47:37 2024 00:36:25.120 read: IOPS=485, BW=1943KiB/s (1990kB/s)(19.0MiB/10018msec) 00:36:25.120 slat (nsec): min=5558, max=82343, avg=13925.09, stdev=12314.04 00:36:25.120 
clat (usec): min=12617, max=52744, avg=32866.24, stdev=4611.59 00:36:25.120 lat (usec): min=12623, max=52763, avg=32880.17, stdev=4611.31 00:36:25.120 clat percentiles (usec): 00:36:25.120 | 1.00th=[20055], 5.00th=[24249], 10.00th=[27132], 20.00th=[30016], 00:36:25.120 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33817], 60.00th=[33817], 00:36:25.120 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35914], 95.00th=[39584], 00:36:25.120 | 99.00th=[47449], 99.50th=[50594], 99.90th=[52691], 99.95th=[52691], 00:36:25.120 | 99.99th=[52691] 00:36:25.120 bw ( KiB/s): min= 1836, max= 2048, per=4.13%, avg=1939.74, stdev=69.20, samples=19 00:36:25.120 iops : min= 459, max= 512, avg=484.89, stdev=17.26, samples=19 00:36:25.120 lat (msec) : 20=0.97%, 50=98.52%, 100=0.51% 00:36:25.120 cpu : usr=98.64%, sys=0.89%, ctx=152, majf=0, minf=9 00:36:25.120 IO depths : 1=0.9%, 2=2.0%, 4=6.1%, 8=76.2%, 16=14.8%, 32=0.0%, >=64=0.0% 00:36:25.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.120 complete : 0=0.0%, 4=89.8%, 8=7.6%, 16=2.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.120 issued rwts: total=4866,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.120 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.120 filename0: (groupid=0, jobs=1): err= 0: pid=3296593: Tue Nov 5 04:47:37 2024 00:36:25.120 read: IOPS=472, BW=1892KiB/s (1937kB/s)(18.5MiB/10010msec) 00:36:25.120 slat (nsec): min=5605, max=71628, avg=18164.31, stdev=11256.47 00:36:25.120 clat (usec): min=9896, max=65437, avg=33660.08, stdev=2388.67 00:36:25.120 lat (usec): min=9902, max=65456, avg=33678.24, stdev=2387.26 00:36:25.120 clat percentiles (usec): 00:36:25.120 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:25.120 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.120 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[35390], 00:36:25.120 | 99.00th=[36439], 99.50th=[40109], 99.90th=[54789], 99.95th=[54789], 00:36:25.120 | 99.99th=[65274] 00:36:25.120 bw ( KiB/s): min= 1788, max= 2043, per=4.01%, avg=1885.42, stdev=71.41, samples=19 00:36:25.120 iops : min= 447, max= 510, avg=471.32, stdev=17.76, samples=19 00:36:25.120 lat (msec) : 10=0.15%, 20=0.49%, 50=99.03%, 100=0.34% 00:36:25.120 cpu : usr=98.74%, sys=0.87%, ctx=89, majf=0, minf=9 00:36:25.120 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:25.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.120 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.120 issued rwts: total=4734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.121 filename0: (groupid=0, jobs=1): err= 0: pid=3296594: Tue Nov 5 04:47:37 2024 00:36:25.121 read: IOPS=474, BW=1897KiB/s (1942kB/s)(18.6MiB/10022msec) 00:36:25.121 slat (nsec): min=5579, max=59894, avg=14439.01, stdev=8551.11 00:36:25.121 clat (usec): min=17640, max=36704, avg=33618.29, stdev=1529.26 00:36:25.121 lat (usec): min=17663, max=36711, avg=33632.72, stdev=1527.08 00:36:25.121 clat percentiles (usec): 00:36:25.121 | 1.00th=[27132], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:25.121 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.121 | 70.00th=[34341], 80.00th=[34866], 90.00th=[34866], 95.00th=[35390], 00:36:25.121 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:36:25.121 | 99.99th=[36963] 00:36:25.121 bw ( KiB/s): min= 1788, 
max= 2048, per=4.04%, avg=1899.16, stdev=77.28, samples=19 00:36:25.121 iops : min= 447, max= 512, avg=474.79, stdev=19.32, samples=19 00:36:25.121 lat (msec) : 20=0.29%, 50=99.71% 00:36:25.121 cpu : usr=99.04%, sys=0.66%, ctx=13, majf=0, minf=9 00:36:25.121 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:25.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.121 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.121 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.121 filename0: (groupid=0, jobs=1): err= 0: pid=3296595: Tue Nov 5 04:47:37 2024 00:36:25.121 read: IOPS=474, BW=1897KiB/s (1942kB/s)(18.6MiB/10022msec) 00:36:25.121 slat (nsec): min=5578, max=84230, avg=15499.25, stdev=10665.98 00:36:25.121 clat (usec): min=17267, max=49766, avg=33591.29, stdev=1607.37 00:36:25.121 lat (usec): min=17276, max=49773, avg=33606.79, stdev=1605.20 00:36:25.121 clat percentiles (usec): 00:36:25.121 | 1.00th=[27132], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:25.121 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.121 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:36:25.121 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36439], 99.95th=[36963], 00:36:25.121 | 99.99th=[49546] 00:36:25.121 bw ( KiB/s): min= 1788, max= 2048, per=4.04%, avg=1899.16, stdev=77.28, samples=19 00:36:25.121 iops : min= 447, max= 512, avg=474.79, stdev=19.32, samples=19 00:36:25.121 lat (msec) : 20=0.38%, 50=99.62% 00:36:25.121 cpu : usr=99.08%, sys=0.62%, ctx=13, majf=0, minf=9 00:36:25.121 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:25.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.121 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.121 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.121 filename1: (groupid=0, jobs=1): err= 0: pid=3296596: Tue Nov 5 04:47:37 2024 00:36:25.121 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.6MiB/10019msec) 00:36:25.121 slat (nsec): min=5586, max=92631, avg=17831.94, stdev=14027.97 00:36:25.121 clat (usec): min=22790, max=40879, avg=33576.24, stdev=1551.45 00:36:25.121 lat (usec): min=22798, max=40889, avg=33594.08, stdev=1547.03 00:36:25.121 clat percentiles (usec): 00:36:25.121 | 1.00th=[24249], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:25.121 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.121 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[35390], 00:36:25.121 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:36:25.121 | 99.99th=[40633] 00:36:25.121 bw ( KiB/s): min= 1788, max= 2048, per=4.04%, avg=1898.68, stdev=77.07, samples=19 00:36:25.121 iops : min= 447, max= 512, avg=474.63, stdev=19.19, samples=19 00:36:25.121 lat (msec) : 50=100.00% 00:36:25.121 cpu : usr=99.06%, sys=0.64%, ctx=14, majf=0, minf=9 00:36:25.121 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:25.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.121 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.121 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.121 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:36:25.121 filename1: (groupid=0, jobs=1): err= 0: pid=3296597: Tue Nov 5 04:47:37 2024 00:36:25.121 read: IOPS=594, BW=2380KiB/s (2437kB/s)(23.4MiB/10052msec) 00:36:25.121 slat (nsec): min=5562, max=74483, avg=8007.09, stdev=4754.42 00:36:25.121 clat (usec): min=4464, max=55012, avg=26764.56, stdev=5699.24 00:36:25.121 lat (usec): min=4471, max=55018, avg=26772.57, stdev=5700.75 00:36:25.121 clat percentiles (usec): 00:36:25.121 | 1.00th=[14877], 5.00th=[18220], 10.00th=[21365], 20.00th=[22414], 00:36:25.121 | 30.00th=[22676], 40.00th=[23462], 50.00th=[24511], 60.00th=[28967], 00:36:25.121 | 70.00th=[32637], 80.00th=[32637], 90.00th=[33817], 95.00th=[33817], 00:36:25.121 | 99.00th=[34866], 99.50th=[35390], 99.90th=[54789], 99.95th=[54789], 00:36:25.121 | 99.99th=[54789] 00:36:25.121 bw ( KiB/s): min= 1916, max= 2752, per=5.08%, avg=2388.80, stdev=365.72, samples=20 00:36:25.121 iops : min= 479, max= 688, avg=597.15, stdev=91.39, samples=20 00:36:25.121 lat (msec) : 10=0.80%, 20=6.15%, 50=92.94%, 100=0.10% 00:36:25.121 cpu : usr=98.98%, sys=0.68%, ctx=43, majf=0, minf=9 00:36:25.121 IO depths : 1=2.2%, 2=4.5%, 4=12.8%, 8=69.9%, 16=10.6%, 32=0.0%, >=64=0.0% 00:36:25.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.121 complete : 0=0.0%, 4=90.6%, 8=4.1%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.121 issued rwts: total=5980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.121 filename1: (groupid=0, jobs=1): err= 0: pid=3296598: Tue Nov 5 04:47:37 2024 00:36:25.121 read: IOPS=474, BW=1897KiB/s (1942kB/s)(18.6MiB/10022msec) 00:36:25.121 slat (nsec): min=5569, max=85566, avg=18612.06, stdev=13645.29 00:36:25.121 clat (usec): min=19672, max=55569, avg=33562.47, stdev=1673.74 00:36:25.121 lat (usec): min=19679, max=55579, avg=33581.08, stdev=1670.62 00:36:25.121 clat percentiles (usec): 00:36:25.121 | 1.00th=[24511], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:25.121 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.121 | 70.00th=[34341], 80.00th=[34866], 90.00th=[34866], 95.00th=[35390], 00:36:25.121 | 99.00th=[36439], 99.50th=[36439], 99.90th=[36963], 99.95th=[49021], 00:36:25.121 | 99.99th=[55313] 00:36:25.121 bw ( KiB/s): min= 1788, max= 2048, per=4.04%, avg=1899.32, stdev=64.93, samples=19 00:36:25.121 iops : min= 447, max= 512, avg=474.79, stdev=16.22, samples=19 00:36:25.121 lat (msec) : 20=0.08%, 50=99.87%, 100=0.04% 00:36:25.121 cpu : usr=98.98%, sys=0.72%, ctx=12, majf=0, minf=9 00:36:25.121 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:25.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.121 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.121 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.121 filename1: (groupid=0, jobs=1): err= 0: pid=3296599: Tue Nov 5 04:47:37 2024 00:36:25.121 read: IOPS=472, BW=1891KiB/s (1936kB/s)(18.5MiB/10018msec) 00:36:25.121 slat (nsec): min=5594, max=80247, avg=16670.62, stdev=13713.74 00:36:25.121 clat (usec): min=19896, max=50890, avg=33704.20, stdev=2221.04 00:36:25.121 lat (usec): min=19943, max=50899, avg=33720.87, stdev=2219.41 00:36:25.121 clat percentiles (usec): 00:36:25.121 | 1.00th=[23200], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:25.121 | 
30.00th=[33162], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.121 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[35390], 00:36:25.121 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45876], 99.95th=[51119], 00:36:25.121 | 99.99th=[51119] 00:36:25.121 bw ( KiB/s): min= 1788, max= 2043, per=4.03%, avg=1892.11, stdev=68.05, samples=19 00:36:25.121 iops : min= 447, max= 510, avg=472.95, stdev=16.90, samples=19 00:36:25.121 lat (msec) : 20=0.08%, 50=99.83%, 100=0.08% 00:36:25.121 cpu : usr=98.89%, sys=0.80%, ctx=17, majf=0, minf=9 00:36:25.121 IO depths : 1=5.8%, 2=11.7%, 4=24.0%, 8=51.8%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:25.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.121 complete : 0=0.0%, 4=93.8%, 8=0.3%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.121 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.121 filename1: (groupid=0, jobs=1): err= 0: pid=3296600: Tue Nov 5 04:47:37 2024 00:36:25.121 read: IOPS=542, BW=2171KiB/s (2224kB/s)(21.2MiB/10021msec) 00:36:25.121 slat (nsec): min=5369, max=71005, avg=10790.28, stdev=10019.02 00:36:25.121 clat (usec): min=18169, max=36279, avg=29383.00, stdev=5175.05 00:36:25.121 lat (usec): min=18179, max=36286, avg=29393.79, stdev=5177.82 00:36:25.121 clat percentiles (usec): 00:36:25.121 | 1.00th=[19530], 5.00th=[20317], 10.00th=[21890], 20.00th=[23725], 00:36:25.121 | 30.00th=[24511], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:36:25.121 | 70.00th=[33424], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:36:25.121 | 99.00th=[35390], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:36:25.121 | 99.99th=[36439] 00:36:25.121 bw ( KiB/s): min= 1792, max= 2560, per=4.62%, avg=2168.47, stdev=264.23, samples=19 00:36:25.121 iops : min= 448, max= 640, avg=542.05, stdev=66.00, samples=19 00:36:25.121 lat (msec) : 20=2.48%, 50=97.52% 00:36:25.121 cpu : usr=99.14%, sys=0.55%, ctx=35, majf=0, minf=10 00:36:25.121 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:25.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.121 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.121 issued rwts: total=5440,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.121 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.121 filename1: (groupid=0, jobs=1): err= 0: pid=3296601: Tue Nov 5 04:47:37 2024 00:36:25.121 read: IOPS=471, BW=1888KiB/s (1933kB/s)(18.4MiB/10001msec) 00:36:25.121 slat (nsec): min=5577, max=83009, avg=14215.34, stdev=12410.24 00:36:25.121 clat (usec): min=20455, max=58629, avg=33789.73, stdev=1443.47 00:36:25.121 lat (usec): min=20464, max=58647, avg=33803.95, stdev=1441.61 00:36:25.121 clat percentiles (usec): 00:36:25.121 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:25.121 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.121 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[35390], 00:36:25.121 | 99.00th=[36439], 99.50th=[36963], 99.90th=[46400], 99.95th=[46400], 00:36:25.121 | 99.99th=[58459] 00:36:25.121 bw ( KiB/s): min= 1788, max= 2048, per=4.01%, avg=1885.58, stdev=83.00, samples=19 00:36:25.121 iops : min= 447, max= 512, avg=471.32, stdev=20.72, samples=19 00:36:25.121 lat (msec) : 50=99.96%, 100=0.04% 00:36:25.122 cpu : usr=98.85%, sys=0.85%, ctx=12, majf=0, minf=9 00:36:25.122 IO depths : 1=6.2%, 2=12.5%, 
4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:25.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.122 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.122 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.122 filename1: (groupid=0, jobs=1): err= 0: pid=3296602: Tue Nov 5 04:47:37 2024 00:36:25.122 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10009msec) 00:36:25.122 slat (nsec): min=5575, max=86107, avg=24172.36, stdev=16880.56 00:36:25.122 clat (usec): min=9211, max=76805, avg=33580.01, stdev=2897.67 00:36:25.122 lat (usec): min=9216, max=76830, avg=33604.18, stdev=2897.77 00:36:25.122 clat percentiles (usec): 00:36:25.122 | 1.00th=[26346], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:25.122 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.122 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:36:25.122 | 99.00th=[36439], 99.50th=[36439], 99.90th=[64750], 99.95th=[64750], 00:36:25.122 | 99.99th=[77071] 00:36:25.122 bw ( KiB/s): min= 1788, max= 2048, per=4.01%, avg=1885.63, stdev=71.71, samples=19 00:36:25.122 iops : min= 447, max= 512, avg=471.37, stdev=17.98, samples=19 00:36:25.122 lat (msec) : 10=0.34%, 20=0.34%, 50=98.99%, 100=0.34% 00:36:25.122 cpu : usr=99.05%, sys=0.64%, ctx=15, majf=0, minf=9 00:36:25.122 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:25.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.122 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.122 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.122 filename1: (groupid=0, jobs=1): err= 0: pid=3296603: Tue Nov 5 04:47:37 2024 00:36:25.122 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10009msec) 00:36:25.122 slat (nsec): min=5491, max=78051, avg=20943.21, stdev=13266.85 00:36:25.122 clat (usec): min=9199, max=64869, avg=33617.21, stdev=2860.51 00:36:25.122 lat (usec): min=9208, max=64888, avg=33638.15, stdev=2860.81 00:36:25.122 clat percentiles (usec): 00:36:25.122 | 1.00th=[26346], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:25.122 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.122 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:36:25.122 | 99.00th=[36439], 99.50th=[36963], 99.90th=[64750], 99.95th=[64750], 00:36:25.122 | 99.99th=[64750] 00:36:25.122 bw ( KiB/s): min= 1788, max= 2048, per=4.01%, avg=1885.63, stdev=71.71, samples=19 00:36:25.122 iops : min= 447, max= 512, avg=471.37, stdev=17.98, samples=19 00:36:25.122 lat (msec) : 10=0.34%, 20=0.34%, 50=98.99%, 100=0.34% 00:36:25.122 cpu : usr=99.11%, sys=0.59%, ctx=27, majf=0, minf=9 00:36:25.122 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:25.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.122 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.122 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.122 filename2: (groupid=0, jobs=1): err= 0: pid=3296604: Tue Nov 5 04:47:37 2024 00:36:25.122 read: IOPS=488, BW=1955KiB/s (2002kB/s)(19.1MiB/10003msec) 00:36:25.122 slat (nsec): min=5559, 
max=75344, avg=14723.91, stdev=11938.23 00:36:25.122 clat (usec): min=9779, max=53416, avg=32608.86, stdev=3958.79 00:36:25.122 lat (usec): min=9790, max=53436, avg=32623.58, stdev=3959.18 00:36:25.122 clat percentiles (usec): 00:36:25.122 | 1.00th=[17957], 5.00th=[23462], 10.00th=[27132], 20.00th=[32375], 00:36:25.122 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33424], 60.00th=[33817], 00:36:25.122 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:36:25.122 | 99.00th=[36439], 99.50th=[46400], 99.90th=[53216], 99.95th=[53216], 00:36:25.122 | 99.99th=[53216] 00:36:25.122 bw ( KiB/s): min= 1788, max= 2320, per=4.14%, avg=1942.95, stdev=135.30, samples=19 00:36:25.122 iops : min= 447, max= 580, avg=485.74, stdev=33.82, samples=19 00:36:25.122 lat (msec) : 10=0.29%, 20=1.27%, 50=98.20%, 100=0.25% 00:36:25.122 cpu : usr=98.74%, sys=0.94%, ctx=25, majf=0, minf=9 00:36:25.122 IO depths : 1=5.6%, 2=11.1%, 4=22.9%, 8=53.5%, 16=7.0%, 32=0.0%, >=64=0.0% 00:36:25.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.122 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.122 issued rwts: total=4890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.122 filename2: (groupid=0, jobs=1): err= 0: pid=3296605: Tue Nov 5 04:47:37 2024 00:36:25.122 read: IOPS=476, BW=1907KiB/s (1953kB/s)(18.6MiB/10002msec) 00:36:25.122 slat (nsec): min=5519, max=84354, avg=10282.12, stdev=8280.07 00:36:25.122 clat (usec): min=15434, max=36540, avg=33480.15, stdev=2171.91 00:36:25.122 lat (usec): min=15440, max=36547, avg=33490.43, stdev=2170.11 00:36:25.122 clat percentiles (usec): 00:36:25.122 | 1.00th=[21365], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:36:25.122 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.122 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[35390], 00:36:25.122 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:36:25.122 | 99.99th=[36439] 00:36:25.122 bw ( KiB/s): min= 1788, max= 2048, per=4.06%, avg=1905.84, stdev=84.02, samples=19 00:36:25.122 iops : min= 447, max= 512, avg=476.42, stdev=20.94, samples=19 00:36:25.122 lat (msec) : 20=0.67%, 50=99.33% 00:36:25.122 cpu : usr=98.99%, sys=0.71%, ctx=17, majf=0, minf=11 00:36:25.122 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:36:25.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.122 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.122 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.122 filename2: (groupid=0, jobs=1): err= 0: pid=3296606: Tue Nov 5 04:47:37 2024 00:36:25.122 read: IOPS=474, BW=1897KiB/s (1942kB/s)(18.5MiB/10001msec) 00:36:25.122 slat (nsec): min=5613, max=80932, avg=21215.18, stdev=15758.32 00:36:25.122 clat (usec): min=12129, max=55492, avg=33568.09, stdev=2795.98 00:36:25.122 lat (usec): min=12137, max=55512, avg=33589.30, stdev=2795.81 00:36:25.122 clat percentiles (usec): 00:36:25.122 | 1.00th=[21627], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:25.122 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.122 | 70.00th=[34341], 80.00th=[34866], 90.00th=[34866], 95.00th=[35390], 00:36:25.122 | 99.00th=[36963], 99.50th=[49021], 99.90th=[55313], 99.95th=[55313], 00:36:25.122 | 
99.99th=[55313] 00:36:25.122 bw ( KiB/s): min= 1788, max= 2048, per=4.03%, avg=1894.74, stdev=70.54, samples=19 00:36:25.122 iops : min= 447, max= 512, avg=473.68, stdev=17.64, samples=19 00:36:25.122 lat (msec) : 20=0.42%, 50=99.16%, 100=0.42% 00:36:25.122 cpu : usr=98.47%, sys=0.93%, ctx=95, majf=0, minf=9 00:36:25.122 IO depths : 1=5.6%, 2=11.7%, 4=24.5%, 8=51.2%, 16=6.9%, 32=0.0%, >=64=0.0% 00:36:25.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.122 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.122 issued rwts: total=4742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.122 filename2: (groupid=0, jobs=1): err= 0: pid=3296607: Tue Nov 5 04:47:37 2024 00:36:25.122 read: IOPS=482, BW=1928KiB/s (1974kB/s)(18.8MiB/10004msec) 00:36:25.122 slat (nsec): min=5552, max=72616, avg=15503.73, stdev=10585.29 00:36:25.122 clat (usec): min=18195, max=61415, avg=33052.92, stdev=4040.77 00:36:25.122 lat (usec): min=18201, max=61435, avg=33068.42, stdev=4041.73 00:36:25.122 clat percentiles (usec): 00:36:25.122 | 1.00th=[21103], 5.00th=[24511], 10.00th=[29230], 20.00th=[32375], 00:36:25.122 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33424], 60.00th=[33817], 00:36:25.122 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:36:25.122 | 99.00th=[50070], 99.50th=[54264], 99.90th=[61604], 99.95th=[61604], 00:36:25.122 | 99.99th=[61604] 00:36:25.122 bw ( KiB/s): min= 1792, max= 2192, per=4.11%, avg=1928.16, stdev=97.68, samples=19 00:36:25.122 iops : min= 448, max= 548, avg=482.00, stdev=24.37, samples=19 00:36:25.122 lat (msec) : 20=0.08%, 50=98.71%, 100=1.20% 00:36:25.122 cpu : usr=98.77%, sys=0.84%, ctx=43, majf=0, minf=9 00:36:25.122 IO depths : 1=5.1%, 2=10.2%, 4=21.7%, 8=55.5%, 16=7.5%, 32=0.0%, >=64=0.0% 00:36:25.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.122 complete : 0=0.0%, 4=93.2%, 8=1.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.122 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.122 filename2: (groupid=0, jobs=1): err= 0: pid=3296608: Tue Nov 5 04:47:37 2024 00:36:25.122 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10009msec) 00:36:25.122 slat (nsec): min=5506, max=75905, avg=22148.20, stdev=13666.92 00:36:25.122 clat (usec): min=9239, max=65017, avg=33612.48, stdev=2905.31 00:36:25.122 lat (usec): min=9246, max=65045, avg=33634.63, stdev=2905.56 00:36:25.122 clat percentiles (usec): 00:36:25.122 | 1.00th=[26346], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:25.122 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.122 | 70.00th=[34341], 80.00th=[34341], 90.00th=[34866], 95.00th=[35390], 00:36:25.122 | 99.00th=[36439], 99.50th=[36963], 99.90th=[64750], 99.95th=[64750], 00:36:25.122 | 99.99th=[65274] 00:36:25.122 bw ( KiB/s): min= 1788, max= 2048, per=4.01%, avg=1885.63, stdev=71.91, samples=19 00:36:25.122 iops : min= 447, max= 512, avg=471.37, stdev=18.03, samples=19 00:36:25.122 lat (msec) : 10=0.34%, 20=0.34%, 50=98.90%, 100=0.42% 00:36:25.122 cpu : usr=97.79%, sys=1.39%, ctx=850, majf=0, minf=9 00:36:25.122 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:36:25.122 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.122 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:36:25.122 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.122 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.122 filename2: (groupid=0, jobs=1): err= 0: pid=3296609: Tue Nov 5 04:47:37 2024 00:36:25.122 read: IOPS=505, BW=2024KiB/s (2072kB/s)(19.8MiB/10009msec) 00:36:25.122 slat (nsec): min=5551, max=74907, avg=14389.72, stdev=11532.92 00:36:25.122 clat (usec): min=11443, max=82264, avg=31531.26, stdev=6398.12 00:36:25.123 lat (usec): min=11466, max=82292, avg=31545.65, stdev=6399.92 00:36:25.123 clat percentiles (usec): 00:36:25.123 | 1.00th=[19006], 5.00th=[22152], 10.00th=[22676], 20.00th=[25560], 00:36:25.123 | 30.00th=[28443], 40.00th=[32375], 50.00th=[32900], 60.00th=[33424], 00:36:25.123 | 70.00th=[33817], 80.00th=[34341], 90.00th=[35390], 95.00th=[40633], 00:36:25.123 | 99.00th=[52167], 99.50th=[53740], 99.90th=[64226], 99.95th=[64226], 00:36:25.123 | 99.99th=[82314] 00:36:25.123 bw ( KiB/s): min= 1771, max= 2368, per=4.31%, avg=2023.63, stdev=130.69, samples=19 00:36:25.123 iops : min= 442, max= 592, avg=505.79, stdev=32.69, samples=19 00:36:25.123 lat (msec) : 20=1.03%, 50=96.92%, 100=2.05% 00:36:25.123 cpu : usr=98.62%, sys=0.92%, ctx=115, majf=0, minf=9 00:36:25.123 IO depths : 1=1.4%, 2=3.8%, 4=12.0%, 8=70.3%, 16=12.6%, 32=0.0%, >=64=0.0% 00:36:25.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.123 complete : 0=0.0%, 4=90.9%, 8=4.9%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.123 issued rwts: total=5064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.123 filename2: (groupid=0, jobs=1): err= 0: pid=3296610: Tue Nov 5 04:47:37 2024 00:36:25.123 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10010msec) 00:36:25.123 slat (nsec): min=5583, max=61665, avg=17232.50, stdev=9651.81 00:36:25.123 clat (usec): min=9351, max=54338, avg=33653.60, stdev=2268.02 00:36:25.123 lat (usec): min=9357, max=54359, avg=33670.83, stdev=2267.08 00:36:25.123 clat percentiles (usec): 00:36:25.123 | 1.00th=[31065], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:25.123 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.123 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[35390], 00:36:25.123 | 99.00th=[36439], 99.50th=[36439], 99.90th=[54264], 99.95th=[54264], 00:36:25.123 | 99.99th=[54264] 00:36:25.123 bw ( KiB/s): min= 1788, max= 2043, per=4.01%, avg=1885.58, stdev=71.19, samples=19 00:36:25.123 iops : min= 447, max= 510, avg=471.32, stdev=17.76, samples=19 00:36:25.123 lat (msec) : 10=0.04%, 20=0.59%, 50=99.03%, 100=0.34% 00:36:25.123 cpu : usr=98.87%, sys=0.74%, ctx=67, majf=0, minf=9 00:36:25.123 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:25.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.123 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.123 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.123 filename2: (groupid=0, jobs=1): err= 0: pid=3296611: Tue Nov 5 04:47:37 2024 00:36:25.123 read: IOPS=474, BW=1900KiB/s (1945kB/s)(18.6MiB/10006msec) 00:36:25.123 slat (nsec): min=5612, max=66529, avg=13378.40, stdev=9074.83 00:36:25.123 clat (usec): min=18576, max=50096, avg=33581.54, stdev=1999.63 00:36:25.123 lat (usec): min=18607, max=50105, avg=33594.92, stdev=1998.05 00:36:25.123 clat percentiles (usec): 
00:36:25.123 | 1.00th=[21103], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:36:25.123 | 30.00th=[32900], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:36:25.123 | 70.00th=[34341], 80.00th=[34866], 90.00th=[35390], 95.00th=[35390], 00:36:25.123 | 99.00th=[35914], 99.50th=[36439], 99.90th=[44303], 99.95th=[44827], 00:36:25.123 | 99.99th=[50070] 00:36:25.123 bw ( KiB/s): min= 1788, max= 2048, per=4.04%, avg=1899.32, stdev=76.86, samples=19 00:36:25.123 iops : min= 447, max= 512, avg=474.79, stdev=19.14, samples=19 00:36:25.123 lat (msec) : 20=0.67%, 50=99.28%, 100=0.04% 00:36:25.123 cpu : usr=98.64%, sys=0.89%, ctx=138, majf=0, minf=9 00:36:25.123 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:36:25.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.123 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:25.123 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:25.123 latency : target=0, window=0, percentile=100.00%, depth=16 00:36:25.123 00:36:25.123 Run status group 0 (all jobs): 00:36:25.123 READ: bw=45.9MiB/s (48.1MB/s), 1888KiB/s-2510KiB/s (1933kB/s-2570kB/s), io=461MiB (483MB), run=10001-10052msec 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.123 bdev_null0 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
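Pulled out of the xtrace noise, the export path for each subsystem in this test is a four-RPC recipe. A standalone sketch of the nqn.2016-06.io.spdk:cnode0 instance (assuming SPDK's scripts/rpc.py against the default RPC socket, with the TCP transport created earlier in the run; all argument values are the ones traced around this point):

  # 64 MiB null bdev with 512-byte blocks, plus 16 bytes of per-block metadata
  # to carry the protection info; --dif-type 1 is T10 DIF Type 1 (a per-block
  # guard CRC and a reference tag that must match the LBA)
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # export it over NVMe/TCP: subsystem, namespace, listener
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420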
00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.123 [2024-11-05 04:47:37.528393] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.123 bdev_null1 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.123 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:25.124 { 00:36:25.124 "params": { 00:36:25.124 "name": "Nvme$subsystem", 00:36:25.124 "trtype": "$TEST_TRANSPORT", 00:36:25.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:25.124 "adrfam": "ipv4", 00:36:25.124 "trsvcid": "$NVMF_PORT", 00:36:25.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:25.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:25.124 "hdgst": ${hdgst:-false}, 00:36:25.124 "ddgst": ${ddgst:-false} 00:36:25.124 }, 00:36:25.124 "method": "bdev_nvme_attach_controller" 00:36:25.124 } 00:36:25.124 EOF 00:36:25.124 )") 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:25.124 { 00:36:25.124 "params": { 00:36:25.124 "name": "Nvme$subsystem", 00:36:25.124 "trtype": "$TEST_TRANSPORT", 00:36:25.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:25.124 "adrfam": "ipv4", 00:36:25.124 "trsvcid": "$NVMF_PORT", 00:36:25.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:25.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:25.124 "hdgst": ${hdgst:-false}, 00:36:25.124 "ddgst": ${ddgst:-false} 00:36:25.124 }, 00:36:25.124 "method": "bdev_nvme_attach_controller" 00:36:25.124 } 00:36:25.124 EOF 00:36:25.124 )") 00:36:25.124 
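Worth noting in the heredoc just assembled: "hdgst": ${hdgst:-false} and "ddgst": ${ddgst:-false} rely on bash default expansion, so the digest knobs resolve to false unless the caller exports them; that is why this fio_dif_rand_params run attaches without digests while the fio_dif_digest test further down sets both to true. The expansion in isolation:

  # bash ":-" expansion: use the variable if set and non-empty, else the default
  unset hdgst
  echo "\"hdgst\": ${hdgst:-false}"   # prints  "hdgst": false
  hdgst=true
  echo "\"hdgst\": ${hdgst:-false}"   # prints  "hdgst": true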
04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:25.124 "params": { 00:36:25.124 "name": "Nvme0", 00:36:25.124 "trtype": "tcp", 00:36:25.124 "traddr": "10.0.0.2", 00:36:25.124 "adrfam": "ipv4", 00:36:25.124 "trsvcid": "4420", 00:36:25.124 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:25.124 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:25.124 "hdgst": false, 00:36:25.124 "ddgst": false 00:36:25.124 }, 00:36:25.124 "method": "bdev_nvme_attach_controller" 00:36:25.124 },{ 00:36:25.124 "params": { 00:36:25.124 "name": "Nvme1", 00:36:25.124 "trtype": "tcp", 00:36:25.124 "traddr": "10.0.0.2", 00:36:25.124 "adrfam": "ipv4", 00:36:25.124 "trsvcid": "4420", 00:36:25.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:25.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:25.124 "hdgst": false, 00:36:25.124 "ddgst": false 00:36:25.124 }, 00:36:25.124 "method": "bdev_nvme_attach_controller" 00:36:25.124 }' 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:25.124 04:47:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:25.124 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:25.124 ... 00:36:25.124 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:36:25.124 ... 
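The two job-description lines above pack the whole fio configuration into one line each: randread, mixed block sizes of 8 KiB (reads), 16 KiB (writes) and 128 KiB (trims), the spdk_bdev engine, and iodepth=8; with numjobs=2 from the parameters set earlier, the two filename sections yield the four threads fio reports next. Reconstructed as a job file it would look roughly like the sketch below (the harness actually feeds its config over /dev/fd/61, and the bdev names Nvme0n1/Nvme1n1 are assumptions based on the Nvme0/Nvme1 attach calls above):

  cat <<'JOB' > dif_rand.fio
  [global]
  ioengine=spdk_bdev         ; SPDK fio bdev plugin, preloaded via LD_PRELOAD above
  spdk_json_conf=/dev/fd/62  ; the JSON produced by gen_nvmf_target_json
  rw=randread
  bs=8k,16k,128k             ; read,write,trim block sizes, per the job lines
  iodepth=8
  numjobs=2
  runtime=5
  [filename0]
  filename=Nvme0n1
  [filename1]
  filename=Nvme1n1
  JOB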
00:36:25.124 fio-3.35 00:36:25.124 Starting 4 threads 00:36:30.409 00:36:30.409 filename0: (groupid=0, jobs=1): err= 0: pid=3298908: Tue Nov 5 04:47:43 2024 00:36:30.409 read: IOPS=2079, BW=16.2MiB/s (17.0MB/s)(81.3MiB/5002msec) 00:36:30.409 slat (nsec): min=5386, max=64867, avg=6096.50, stdev=2097.75 00:36:30.409 clat (usec): min=1402, max=6634, avg=3829.45, stdev=705.20 00:36:30.409 lat (usec): min=1419, max=6640, avg=3835.54, stdev=705.10 00:36:30.409 clat percentiles (usec): 00:36:30.409 | 1.00th=[ 2671], 5.00th=[ 3163], 10.00th=[ 3326], 20.00th=[ 3425], 00:36:30.409 | 30.00th=[ 3490], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3654], 00:36:30.409 | 70.00th=[ 3720], 80.00th=[ 3884], 90.00th=[ 5276], 95.00th=[ 5407], 00:36:30.409 | 99.00th=[ 5538], 99.50th=[ 5735], 99.90th=[ 5932], 99.95th=[ 5997], 00:36:30.409 | 99.99th=[ 6652] 00:36:30.409 bw ( KiB/s): min=16288, max=17008, per=24.82%, avg=16586.67, stdev=257.00, samples=9 00:36:30.409 iops : min= 2036, max= 2126, avg=2073.33, stdev=32.12, samples=9 00:36:30.409 lat (msec) : 2=0.23%, 4=80.81%, 10=18.96% 00:36:30.409 cpu : usr=96.82%, sys=2.96%, ctx=6, majf=0, minf=109 00:36:30.409 IO depths : 1=0.1%, 2=0.1%, 4=72.6%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.409 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.409 issued rwts: total=10404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.409 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:30.409 filename0: (groupid=0, jobs=1): err= 0: pid=3298909: Tue Nov 5 04:47:43 2024 00:36:30.409 read: IOPS=2147, BW=16.8MiB/s (17.6MB/s)(83.9MiB/5002msec) 00:36:30.409 slat (nsec): min=5399, max=64179, avg=8179.13, stdev=2549.45 00:36:30.410 clat (usec): min=1486, max=6725, avg=3706.35, stdev=490.09 00:36:30.410 lat (usec): min=1495, max=6749, avg=3714.53, stdev=489.78 00:36:30.410 clat percentiles (usec): 00:36:30.410 | 1.00th=[ 3195], 5.00th=[ 3326], 10.00th=[ 3359], 20.00th=[ 3425], 00:36:30.410 | 30.00th=[ 3523], 40.00th=[ 3556], 50.00th=[ 3589], 60.00th=[ 3621], 00:36:30.410 | 70.00th=[ 3687], 80.00th=[ 3785], 90.00th=[ 4047], 95.00th=[ 5211], 00:36:30.410 | 99.00th=[ 5473], 99.50th=[ 5604], 99.90th=[ 5866], 99.95th=[ 6128], 00:36:30.410 | 99.99th=[ 6718] 00:36:30.410 bw ( KiB/s): min=16416, max=17792, per=25.81%, avg=17246.22, stdev=546.26, samples=9 00:36:30.410 iops : min= 2052, max= 2224, avg=2155.78, stdev=68.28, samples=9 00:36:30.410 lat (msec) : 2=0.03%, 4=89.09%, 10=10.88% 00:36:30.410 cpu : usr=96.84%, sys=2.90%, ctx=7, majf=0, minf=72 00:36:30.410 IO depths : 1=0.1%, 2=0.1%, 4=66.9%, 8=33.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.410 complete : 0=0.0%, 4=97.0%, 8=3.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.410 issued rwts: total=10743,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.410 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:30.410 filename1: (groupid=0, jobs=1): err= 0: pid=3298910: Tue Nov 5 04:47:43 2024 00:36:30.410 read: IOPS=2081, BW=16.3MiB/s (17.0MB/s)(81.3MiB/5002msec) 00:36:30.410 slat (nsec): min=5385, max=43227, avg=6184.68, stdev=2072.88 00:36:30.410 clat (usec): min=2118, max=6071, avg=3827.14, stdev=681.79 00:36:30.410 lat (usec): min=2124, max=6077, avg=3833.32, stdev=681.69 00:36:30.410 clat percentiles (usec): 00:36:30.410 | 1.00th=[ 2835], 5.00th=[ 3228], 10.00th=[ 3326], 20.00th=[ 3458], 00:36:30.410 | 30.00th=[ 3490], 
40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3654], 00:36:30.410 | 70.00th=[ 3720], 80.00th=[ 3884], 90.00th=[ 5211], 95.00th=[ 5342], 00:36:30.410 | 99.00th=[ 5604], 99.50th=[ 5669], 99.90th=[ 5866], 99.95th=[ 5997], 00:36:30.410 | 99.99th=[ 6063] 00:36:30.410 bw ( KiB/s): min=16368, max=16848, per=24.89%, avg=16631.11, stdev=177.65, samples=9 00:36:30.410 iops : min= 2046, max= 2106, avg=2078.89, stdev=22.21, samples=9 00:36:30.410 lat (msec) : 4=81.72%, 10=18.28% 00:36:30.410 cpu : usr=97.04%, sys=2.74%, ctx=7, majf=0, minf=131 00:36:30.410 IO depths : 1=0.1%, 2=0.1%, 4=72.6%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.410 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.410 issued rwts: total=10410,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.410 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:30.410 filename1: (groupid=0, jobs=1): err= 0: pid=3298912: Tue Nov 5 04:47:43 2024 00:36:30.410 read: IOPS=2044, BW=16.0MiB/s (16.7MB/s)(79.9MiB/5002msec) 00:36:30.410 slat (nsec): min=5386, max=45233, avg=6020.64, stdev=1845.23 00:36:30.410 clat (usec): min=1441, max=7932, avg=3896.33, stdev=715.11 00:36:30.410 lat (usec): min=1447, max=7960, avg=3902.35, stdev=715.05 00:36:30.410 clat percentiles (usec): 00:36:30.410 | 1.00th=[ 3195], 5.00th=[ 3228], 10.00th=[ 3425], 20.00th=[ 3458], 00:36:30.410 | 30.00th=[ 3490], 40.00th=[ 3556], 50.00th=[ 3621], 60.00th=[ 3720], 00:36:30.410 | 70.00th=[ 3785], 80.00th=[ 4047], 90.00th=[ 5276], 95.00th=[ 5473], 00:36:30.410 | 99.00th=[ 5800], 99.50th=[ 5866], 99.90th=[ 6259], 99.95th=[ 7635], 00:36:30.410 | 99.99th=[ 7701] 00:36:30.410 bw ( KiB/s): min=16080, max=16672, per=24.44%, avg=16334.22, stdev=189.05, samples=9 00:36:30.410 iops : min= 2010, max= 2084, avg=2041.78, stdev=23.63, samples=9 00:36:30.410 lat (msec) : 2=0.03%, 4=79.15%, 10=20.82% 00:36:30.410 cpu : usr=97.40%, sys=2.30%, ctx=38, majf=0, minf=59 00:36:30.410 IO depths : 1=0.1%, 2=0.1%, 4=72.7%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:30.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.410 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:30.410 issued rwts: total=10226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:30.410 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:30.410 00:36:30.410 Run status group 0 (all jobs): 00:36:30.410 READ: bw=65.3MiB/s (68.4MB/s), 16.0MiB/s-16.8MiB/s (16.7MB/s-17.6MB/s), io=326MiB (342MB), run=5002-5002msec 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.410 04:47:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.410 04:47:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.410 00:36:30.410 real 0m24.750s 00:36:30.410 user 5m18.573s 00:36:30.410 sys 0m4.331s 00:36:30.410 04:47:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:30.410 04:47:44 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:30.410 ************************************ 00:36:30.410 END TEST fio_dif_rand_params 00:36:30.410 ************************************ 00:36:30.410 04:47:44 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:30.410 04:47:44 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:30.410 04:47:44 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:30.410 04:47:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:30.671 ************************************ 00:36:30.671 START TEST fio_dif_digest 00:36:30.671 ************************************ 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 
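Relative to the random-parameter runs, the digest variant changes two things in the parameters just set. NULL_DIF=3 means the null bdevs that follow are created with --dif-type 3 (T10 DIF Type 3 still carries a per-block guard CRC, but unlike Type 1 its reference tag is opaque and is not checked against the LBA), and hdgst=true/ddgst=true will make the ${hdgst:-false}/${ddgst:-false} defaults shown earlier resolve to true. So the creation call traced just below amounts to (a sketch, same scripts/rpc.py assumption as before):

  # same null-bdev recipe as before, now with T10 DIF Type 3 protection info
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3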
00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:30.671 bdev_null0 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:30.671 [2024-11-05 04:47:44.131141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:30.671 { 00:36:30.671 "params": { 00:36:30.671 "name": "Nvme$subsystem", 00:36:30.671 "trtype": "$TEST_TRANSPORT", 00:36:30.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:30.671 "adrfam": "ipv4", 00:36:30.671 "trsvcid": "$NVMF_PORT", 00:36:30.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:30.671 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:36:30.671 "hdgst": ${hdgst:-false}, 00:36:30.671 "ddgst": ${ddgst:-false} 00:36:30.671 }, 00:36:30.671 "method": "bdev_nvme_attach_controller" 00:36:30.671 } 00:36:30.671 EOF 00:36:30.671 )") 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:30.671 "params": { 00:36:30.671 "name": "Nvme0", 00:36:30.671 "trtype": "tcp", 00:36:30.671 "traddr": "10.0.0.2", 00:36:30.671 "adrfam": "ipv4", 00:36:30.671 "trsvcid": "4420", 00:36:30.671 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:30.671 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:30.671 "hdgst": true, 00:36:30.671 "ddgst": true 00:36:30.671 }, 00:36:30.671 "method": "bdev_nvme_attach_controller" 00:36:30.671 }' 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:36:30.671 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:30.672 04:47:44 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:31.240 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:31.240 ... 
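The /dev/fd plumbing above feeds fio nothing more than a bdev JSON config plus a job file. A hand-rolled sketch of the equivalent invocation: the /tmp paths are hypothetical stand-ins for the pipes, the job values are read off the filename0 banner just above, and the outer subsystems/bdev wrapper is the shape gen_nvmf_target_json assembles around the params block printed in the trace:

cat > /tmp/digest.json <<'JSON'
{"subsystems": [{"subsystem": "bdev", "config": [{
  "method": "bdev_nvme_attach_controller",
  "params": {"name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
             "adrfam": "ipv4", "trsvcid": "4420",
             "subnqn": "nqn.2016-06.io.spdk:cnode0",
             "hostnqn": "nqn.2016-06.io.spdk:host0",
             "hdgst": true, "ddgst": true}}]}]}
JSON
cat > /tmp/digest.fio <<'JOB'
[filename0]
filename=Nvme0n1   ; controller "Nvme0" + namespace 1, per SPDK bdev naming
rw=randread
bs=128k
iodepth=3
thread=1           ; three clones, matching "Starting 3 threads"
numjobs=3
JOB
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/digest.json /tmp/digest.fio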
00:36:31.240 fio-3.35 00:36:31.240 Starting 3 threads 00:36:43.470 00:36:43.470 filename0: (groupid=0, jobs=1): err= 0: pid=3300312: Tue Nov 5 04:47:55 2024 00:36:43.470 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(268MiB/10006msec) 00:36:43.470 slat (nsec): min=5759, max=34690, avg=6483.14, stdev=1022.16 00:36:43.470 clat (usec): min=7964, max=56401, avg=13998.97, stdev=4004.39 00:36:43.470 lat (usec): min=7970, max=56407, avg=14005.46, stdev=4004.41 00:36:43.470 clat percentiles (usec): 00:36:43.470 | 1.00th=[ 9503], 5.00th=[10683], 10.00th=[12125], 20.00th=[12780], 00:36:43.470 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[14091], 00:36:43.470 | 70.00th=[14353], 80.00th=[14746], 90.00th=[15270], 95.00th=[15795], 00:36:43.470 | 99.00th=[17433], 99.50th=[55313], 99.90th=[55837], 99.95th=[56361], 00:36:43.470 | 99.99th=[56361] 00:36:43.470 bw ( KiB/s): min=25600, max=29440, per=32.75%, avg=27430.40, stdev=997.64, samples=20 00:36:43.470 iops : min= 200, max= 230, avg=214.30, stdev= 7.79, samples=20 00:36:43.470 lat (msec) : 10=2.52%, 20=96.64%, 100=0.84% 00:36:43.470 cpu : usr=94.99%, sys=4.77%, ctx=13, majf=0, minf=157 00:36:43.470 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:43.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:43.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:43.470 issued rwts: total=2143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:43.470 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:43.470 filename0: (groupid=0, jobs=1): err= 0: pid=3300313: Tue Nov 5 04:47:55 2024 00:36:43.470 read: IOPS=201, BW=25.2MiB/s (26.4MB/s)(253MiB/10047msec) 00:36:43.470 slat (nsec): min=5714, max=30854, avg=6450.74, stdev=840.36 00:36:43.470 clat (usec): min=8613, max=56469, avg=14840.56, stdev=6090.58 00:36:43.470 lat (usec): min=8619, max=56476, avg=14847.01, stdev=6090.58 00:36:43.470 clat percentiles (usec): 00:36:43.470 | 1.00th=[10028], 5.00th=[11994], 10.00th=[12518], 20.00th=[13042], 00:36:43.470 | 30.00th=[13435], 40.00th=[13698], 50.00th=[13960], 60.00th=[14222], 00:36:43.470 | 70.00th=[14484], 80.00th=[15008], 90.00th=[15533], 95.00th=[16188], 00:36:43.470 | 99.00th=[53740], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:36:43.470 | 99.99th=[56361] 00:36:43.470 bw ( KiB/s): min=21248, max=28672, per=30.95%, avg=25920.00, stdev=1895.59, samples=20 00:36:43.470 iops : min= 166, max= 224, avg=202.50, stdev=14.81, samples=20 00:36:43.470 lat (msec) : 10=0.94%, 20=96.74%, 50=0.05%, 100=2.27% 00:36:43.470 cpu : usr=95.65%, sys=4.12%, ctx=20, majf=0, minf=130 00:36:43.470 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:43.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:43.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:43.470 issued rwts: total=2027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:43.470 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:43.470 filename0: (groupid=0, jobs=1): err= 0: pid=3300314: Tue Nov 5 04:47:55 2024 00:36:43.470 read: IOPS=239, BW=29.9MiB/s (31.4MB/s)(301MiB/10047msec) 00:36:43.470 slat (nsec): min=5780, max=41927, avg=6502.55, stdev=1273.00 00:36:43.470 clat (usec): min=5858, max=51706, avg=12511.62, stdev=1884.50 00:36:43.470 lat (usec): min=5864, max=51712, avg=12518.13, stdev=1884.50 00:36:43.470 clat percentiles (usec): 00:36:43.470 | 1.00th=[ 7898], 5.00th=[ 9110], 10.00th=[10290], 20.00th=[11600], 
00:36:43.470 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12780], 60.00th=[13042], 00:36:43.470 | 70.00th=[13304], 80.00th=[13566], 90.00th=[14091], 95.00th=[14484], 00:36:43.470 | 99.00th=[15401], 99.50th=[15795], 99.90th=[16581], 99.95th=[49021], 00:36:43.470 | 99.99th=[51643] 00:36:43.470 bw ( KiB/s): min=29440, max=34048, per=36.71%, avg=30745.60, stdev=1027.03, samples=20 00:36:43.470 iops : min= 230, max= 266, avg=240.20, stdev= 8.02, samples=20 00:36:43.470 lat (msec) : 10=8.90%, 20=91.01%, 50=0.04%, 100=0.04% 00:36:43.470 cpu : usr=94.89%, sys=4.86%, ctx=18, majf=0, minf=136 00:36:43.470 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:43.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:43.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:43.470 issued rwts: total=2404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:43.470 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:43.470 00:36:43.470 Run status group 0 (all jobs): 00:36:43.470 READ: bw=81.8MiB/s (85.8MB/s), 25.2MiB/s-29.9MiB/s (26.4MB/s-31.4MB/s), io=822MiB (862MB), run=10006-10047msec 00:36:43.470 04:47:55 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:43.470 04:47:55 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:43.471 04:47:55 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:43.471 04:47:55 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:43.471 04:47:55 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:43.471 04:47:55 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:43.471 04:47:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.471 04:47:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:43.471 04:47:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.471 04:47:55 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:43.471 04:47:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:43.471 04:47:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:43.471 04:47:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:43.471 00:36:43.471 real 0m11.289s 00:36:43.471 user 0m40.754s 00:36:43.471 sys 0m1.739s 00:36:43.471 04:47:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:43.471 04:47:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:43.471 ************************************ 00:36:43.471 END TEST fio_dif_digest 00:36:43.471 ************************************ 00:36:43.471 04:47:55 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:43.471 04:47:55 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:43.471 04:47:55 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:43.471 04:47:55 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:36:43.471 04:47:55 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:43.471 04:47:55 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:36:43.471 04:47:55 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:43.471 04:47:55 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:43.471 rmmod nvme_tcp 00:36:43.471 rmmod nvme_fabrics 00:36:43.471 rmmod nvme_keyring 00:36:43.471 04:47:55 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:43.471 04:47:55 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:36:43.471 04:47:55 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:36:43.471 04:47:55 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 3289924 ']' 00:36:43.471 04:47:55 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 3289924 00:36:43.471 04:47:55 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 3289924 ']' 00:36:43.471 04:47:55 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 3289924 00:36:43.471 04:47:55 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:36:43.471 04:47:55 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:43.471 04:47:55 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3289924 00:36:43.471 04:47:55 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:43.471 04:47:55 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:43.471 04:47:55 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3289924' 00:36:43.471 killing process with pid 3289924 00:36:43.471 04:47:55 nvmf_dif -- common/autotest_common.sh@971 -- # kill 3289924 00:36:43.471 04:47:55 nvmf_dif -- common/autotest_common.sh@976 -- # wait 3289924 00:36:43.471 04:47:55 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:43.471 04:47:55 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:45.385 Waiting for block devices as requested 00:36:45.385 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:45.385 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:45.385 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:45.385 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:45.385 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:45.385 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:45.645 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:45.645 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:45.645 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:36:45.905 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:36:45.905 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:36:46.171 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:36:46.171 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:36:46.171 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:36:46.171 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:36:46.434 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:36:46.434 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:36:46.700 04:48:00 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:46.700 04:48:00 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:46.700 04:48:00 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:36:46.700 04:48:00 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:36:46.700 04:48:00 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:46.700 04:48:00 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:36:46.700 04:48:00 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:46.700 04:48:00 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:46.700 04:48:00 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.700 04:48:00 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:46.700 04:48:00 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:48.723 04:48:02 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:48.723 
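nvmftestfini's teardown, traced above, mirrors the setup: unload the kernel initiator modules, kill the target, rebind devices, strip only the SPDK-tagged firewall rules, and drop the target's network namespace. A condensed sketch of the same sequence (pid, interface, and namespace names taken from this run):

sync
modprobe -v -r nvme-tcp nvme-fabrics     # the rmmod output above shows nvme_keyring going too
kill "$nvmfpid" && wait "$nvmfpid"       # $nvmfpid: the nvmf_tgt pid, 3289924 in this run
./scripts/setup.sh reset                 # rebind ioatdma/nvme devices away from vfio-pci
iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only SPDK-tagged rules
ip netns del cvl_0_0_ns_spdk             # what _remove_spdk_ns does here
ip -4 addr flush cvl_0_1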
00:36:48.723 real 1m17.903s 00:36:48.723 user 8m0.496s 00:36:48.723 sys 0m21.057s 00:36:48.723 04:48:02 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:48.723 04:48:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:48.723 ************************************ 00:36:48.723 END TEST nvmf_dif 00:36:48.723 ************************************ 00:36:48.723 04:48:02 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:48.723 04:48:02 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:48.723 04:48:02 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:48.723 04:48:02 -- common/autotest_common.sh@10 -- # set +x 00:36:48.984 ************************************ 00:36:48.984 START TEST nvmf_abort_qd_sizes 00:36:48.984 ************************************ 00:36:48.984 04:48:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:48.985 * Looking for test storage... 00:36:48.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:48.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.985 --rc genhtml_branch_coverage=1 00:36:48.985 --rc genhtml_function_coverage=1 00:36:48.985 --rc genhtml_legend=1 00:36:48.985 --rc geninfo_all_blocks=1 00:36:48.985 --rc geninfo_unexecuted_blocks=1 00:36:48.985 00:36:48.985 ' 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:48.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.985 --rc genhtml_branch_coverage=1 00:36:48.985 --rc genhtml_function_coverage=1 00:36:48.985 --rc genhtml_legend=1 00:36:48.985 --rc geninfo_all_blocks=1 00:36:48.985 --rc geninfo_unexecuted_blocks=1 00:36:48.985 00:36:48.985 ' 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:48.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.985 --rc genhtml_branch_coverage=1 00:36:48.985 --rc genhtml_function_coverage=1 00:36:48.985 --rc genhtml_legend=1 00:36:48.985 --rc geninfo_all_blocks=1 00:36:48.985 --rc geninfo_unexecuted_blocks=1 00:36:48.985 00:36:48.985 ' 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:48.985 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.985 --rc genhtml_branch_coverage=1 00:36:48.985 --rc genhtml_function_coverage=1 00:36:48.985 --rc genhtml_legend=1 00:36:48.985 --rc geninfo_all_blocks=1 00:36:48.985 --rc geninfo_unexecuted_blocks=1 00:36:48.985 00:36:48.985 ' 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.985 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:48.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:36:48.986 04:48:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:57.128 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:57.128 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:57.128 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:57.128 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:57.129 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:57.129 04:48:09 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:57.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:57.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.661 ms 00:36:57.129 00:36:57.129 --- 10.0.0.2 ping statistics --- 00:36:57.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.129 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:57.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:57.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:36:57.129 00:36:57.129 --- 10.0.0.1 ping statistics --- 00:36:57.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.129 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:57.129 04:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:00.431 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:00.431 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=3310315 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 3310315 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 3310315 ']' 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:00.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:00.431 04:48:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:00.431 [2024-11-05 04:48:14.029986] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:37:00.432 [2024-11-05 04:48:14.030038] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:00.692 [2024-11-05 04:48:14.108554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:00.692 [2024-11-05 04:48:14.148319] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:00.692 [2024-11-05 04:48:14.148352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:00.692 [2024-11-05 04:48:14.148360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:00.692 [2024-11-05 04:48:14.148367] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:00.692 [2024-11-05 04:48:14.148373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:00.692 [2024-11-05 04:48:14.149877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:00.692 [2024-11-05 04:48:14.150036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:00.692 [2024-11-05 04:48:14.150164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:00.692 [2024-11-05 04:48:14.150165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:01.262 
04:48:14 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:01.262 04:48:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:01.522 ************************************ 00:37:01.522 START TEST spdk_target_abort 00:37:01.522 ************************************ 00:37:01.522 04:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:37:01.522 04:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:01.522 04:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:37:01.522 04:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.522 04:48:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:01.783 spdk_targetn1 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:01.783 [2024-11-05 04:48:15.232787] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:01.783 [2024-11-05 04:48:15.273085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:01.783 04:48:15 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:02.045 [2024-11-05 04:48:15.435164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 
cid:190 nsid:1 lba:288 len:8 PRP1 0x200004abe000 PRP2 0x0 00:37:02.045 [2024-11-05 04:48:15.435190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0025 p:1 m:0 dnr:0 00:37:02.045 [2024-11-05 04:48:15.437637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:456 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:02.045 [2024-11-05 04:48:15.437653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:003a p:1 m:0 dnr:0 00:37:02.045 [2024-11-05 04:48:15.438080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:488 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:02.045 [2024-11-05 04:48:15.438090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:003e p:1 m:0 dnr:0 00:37:02.045 [2024-11-05 04:48:15.443924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:608 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:02.045 [2024-11-05 04:48:15.443938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:004e p:1 m:0 dnr:0 00:37:02.045 [2024-11-05 04:48:15.444258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:640 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:37:02.045 [2024-11-05 04:48:15.444269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0052 p:1 m:0 dnr:0 00:37:02.045 [2024-11-05 04:48:15.444507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:656 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:02.045 [2024-11-05 04:48:15.444516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0053 p:1 m:0 dnr:0 00:37:02.045 [2024-11-05 04:48:15.466982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1456 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:02.045 [2024-11-05 04:48:15.466998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00b8 p:1 m:0 dnr:0 00:37:02.045 [2024-11-05 04:48:15.471112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1544 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:02.045 [2024-11-05 04:48:15.471129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00c3 p:1 m:0 dnr:0 00:37:02.045 [2024-11-05 04:48:15.472031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:1608 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:02.045 [2024-11-05 04:48:15.472043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00ca p:1 m:0 dnr:0 00:37:02.045 [2024-11-05 04:48:15.500146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2576 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:02.045 [2024-11-05 04:48:15.500161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:37:02.045 [2024-11-05 04:48:15.525368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:3600 len:8 PRP1 0x200004ac6000 PRP2 0x0 00:37:02.045 [2024-11-05 04:48:15.525384] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:00c3 p:0 m:0 dnr:0 00:37:02.045 [2024-11-05 04:48:15.526326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:3664 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:37:02.045 [2024-11-05 04:48:15.526338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00cb p:0 m:0 dnr:0 00:37:02.045 [2024-11-05 04:48:15.532562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3856 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:37:02.045 [2024-11-05 04:48:15.532576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00e3 p:0 m:0 dnr:0 00:37:05.342 Initializing NVMe Controllers 00:37:05.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:05.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:05.342 Initialization complete. Launching workers. 00:37:05.342 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12448, failed: 13 00:37:05.342 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3662, failed to submit 8799 00:37:05.342 success 722, unsuccessful 2940, failed 0 00:37:05.342 04:48:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:05.342 04:48:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:05.342 [2024-11-05 04:48:18.702858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:184 nsid:1 lba:1040 len:8 PRP1 0x200004e4e000 PRP2 0x0 00:37:05.342 [2024-11-05 04:48:18.702893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:184 cdw0:0 sqhd:008c p:1 m:0 dnr:0 00:37:05.342 [2024-11-05 04:48:18.733931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:1848 len:8 PRP1 0x200004e54000 PRP2 0x0 00:37:05.342 [2024-11-05 04:48:18.733955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:00e8 p:1 m:0 dnr:0 00:37:05.342 [2024-11-05 04:48:18.749855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:174 nsid:1 lba:2128 len:8 PRP1 0x200004e46000 PRP2 0x0 00:37:05.342 [2024-11-05 04:48:18.749878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:174 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:37:05.342 [2024-11-05 04:48:18.804986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:177 nsid:1 lba:3512 len:8 PRP1 0x200004e50000 PRP2 0x0 00:37:05.342 [2024-11-05 04:48:18.805010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:177 cdw0:0 sqhd:00b8 p:0 m:0 dnr:0 00:37:05.342 [2024-11-05 04:48:18.820867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:171 nsid:1 lba:3776 len:8 PRP1 0x200004e4a000 PRP2 0x0 00:37:05.342 [2024-11-05 04:48:18.820893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:171 cdw0:0 sqhd:00e3 p:0 m:0 dnr:0 00:37:06.283 [2024-11-05 04:48:19.652883] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:169 nsid:1 lba:23328 len:8 PRP1 0x200004e62000 PRP2 0x0 00:37:06.283 [2024-11-05 04:48:19.652920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:169 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:37:08.192 Initializing NVMe Controllers 00:37:08.192 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:08.192 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:08.192 Initialization complete. Launching workers. 00:37:08.192 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8549, failed: 6 00:37:08.192 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1222, failed to submit 7333 00:37:08.192 success 333, unsuccessful 889, failed 0 00:37:08.192 04:48:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:08.192 04:48:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:10.734 [2024-11-05 04:48:24.272611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:150 nsid:1 lba:256728 len:8 PRP1 0x200004aca000 PRP2 0x0 00:37:10.734 [2024-11-05 04:48:24.272644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:150 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:37:11.676 Initializing NVMe Controllers 00:37:11.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:11.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:11.676 Initialization complete. Launching workers. 
00:37:11.676 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42091, failed: 1 00:37:11.676 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2836, failed to submit 39256 00:37:11.676 success 598, unsuccessful 2238, failed 0 00:37:11.676 04:48:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:11.676 04:48:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.676 04:48:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:11.676 04:48:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:11.676 04:48:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:11.676 04:48:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:11.676 04:48:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:13.583 04:48:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:13.583 04:48:26 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3310315 00:37:13.583 04:48:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 3310315 ']' 00:37:13.583 04:48:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 3310315 00:37:13.583 04:48:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:37:13.583 04:48:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:13.583 04:48:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3310315 00:37:13.583 04:48:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:13.583 04:48:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:13.583 04:48:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3310315' 00:37:13.583 killing process with pid 3310315 00:37:13.583 04:48:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 3310315 00:37:13.583 04:48:26 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 3310315 00:37:13.583 00:37:13.583 real 0m12.131s 00:37:13.583 user 0m49.682s 00:37:13.583 sys 0m1.786s 00:37:13.583 04:48:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:13.583 04:48:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:13.583 ************************************ 00:37:13.583 END TEST spdk_target_abort 00:37:13.583 ************************************ 00:37:13.583 04:48:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:13.583 04:48:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:13.583 04:48:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:13.583 04:48:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:13.583 ************************************ 00:37:13.584 START TEST kernel_target_abort 00:37:13.584 
************************************ 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:13.584 04:48:27 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:16.885 Waiting for block devices as requested 00:37:17.144 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:17.144 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:17.144 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:17.144 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:17.404 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:17.404 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:17.404 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:17.664 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:17.664 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:17.925 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:17.925 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:17.925 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:18.186 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:18.186 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:18.186 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:18.186 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:18.447 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:18.708 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:37:18.708 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:18.709 No valid GPT data, bailing 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:18.709 04:48:32 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:37:18.709 00:37:18.709 Discovery Log Number of Records 2, Generation counter 2 00:37:18.709 =====Discovery Log Entry 0====== 00:37:18.709 trtype: tcp 00:37:18.709 adrfam: ipv4 00:37:18.709 subtype: current discovery subsystem 00:37:18.709 treq: not specified, sq flow control disable supported 00:37:18.709 portid: 1 00:37:18.709 trsvcid: 4420 00:37:18.709 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:18.709 traddr: 10.0.0.1 00:37:18.709 eflags: none 00:37:18.709 sectype: none 00:37:18.709 =====Discovery Log Entry 1====== 00:37:18.709 trtype: tcp 00:37:18.709 adrfam: ipv4 00:37:18.709 subtype: nvme subsystem 00:37:18.709 treq: not specified, sq flow control disable supported 00:37:18.709 portid: 1 00:37:18.709 trsvcid: 4420 00:37:18.709 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:18.709 traddr: 10.0.0.1 00:37:18.709 eflags: none 00:37:18.709 sectype: none 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:18.709 04:48:32 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:18.709 04:48:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:22.010 Initializing NVMe Controllers 00:37:22.010 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:22.010 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:22.010 Initialization complete. Launching workers. 00:37:22.010 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67132, failed: 0 00:37:22.010 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67132, failed to submit 0 00:37:22.010 success 0, unsuccessful 67132, failed 0 00:37:22.010 04:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:22.010 04:48:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:25.309 Initializing NVMe Controllers 00:37:25.309 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:25.309 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:25.309 Initialization complete. Launching workers. 
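For reference, the configure_kernel_target trace further up assembles the kernel nvmet target entirely through configfs. xtrace does not record where each echo is redirected, so the configfs attribute names below are the standard kernel nvmet ones and are an assumption about the exact destinations; a condensed sketch of the same sequence:

# Condensed configure_kernel_target as traced above; attribute names assumed,
# since redirect targets are invisible to xtrace.
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"  # model string
echo 1 > "$subsys/attr_allow_any_host"                        # accept any host NQN
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"        # back namespace 1 with the local disk
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"                           # listener address, transport, service id, family
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"                           # expose the subsystem on the port

The clean_kernel_target trace below undoes this in reverse: remove the port's subsystem link, rmdir the namespace, port, and subsystem, then modprobe -r nvmet_tcp nvmet.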
00:37:25.309 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 108177, failed: 0 00:37:25.309 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27238, failed to submit 80939 00:37:25.309 success 0, unsuccessful 27238, failed 0 00:37:25.309 04:48:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:25.309 04:48:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:28.610 Initializing NVMe Controllers 00:37:28.610 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:37:28.610 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:28.610 Initialization complete. Launching workers. 00:37:28.610 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 101717, failed: 0 00:37:28.610 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25438, failed to submit 76279 00:37:28.610 success 0, unsuccessful 25438, failed 0 00:37:28.610 04:48:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:37:28.610 04:48:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:37:28.610 04:48:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:37:28.610 04:48:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:28.610 04:48:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:28.610 04:48:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:37:28.610 04:48:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:28.610 04:48:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:37:28.610 04:48:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:37:28.610 04:48:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:31.916 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:31.916 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:31.916 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:31.916 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:31.916 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:31.916 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:31.916 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:37:31.916 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:31.916 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:37:31.916 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:37:31.916 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:37:31.916 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:37:31.916 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:37:31.916 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:37:31.916 0000:00:01.0 (8086 0b00): ioatdma 
-> vfio-pci 00:37:31.916 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:37:33.301 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:37:33.562 00:37:33.562 real 0m20.021s 00:37:33.562 user 0m9.769s 00:37:33.562 sys 0m6.025s 00:37:33.563 04:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:33.563 04:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:33.563 ************************************ 00:37:33.563 END TEST kernel_target_abort 00:37:33.563 ************************************ 00:37:33.563 04:48:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:37:33.563 04:48:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:37:33.563 04:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:33.563 04:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:37:33.824 04:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:33.824 04:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:37:33.824 04:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:33.824 04:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:33.824 rmmod nvme_tcp 00:37:33.824 rmmod nvme_fabrics 00:37:33.824 rmmod nvme_keyring 00:37:33.824 04:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:33.824 04:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:37:33.824 04:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:37:33.824 04:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 3310315 ']' 00:37:33.824 04:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 3310315 00:37:33.824 04:48:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 3310315 ']' 00:37:33.824 04:48:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 3310315 00:37:33.824 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (3310315) - No such process 00:37:33.824 04:48:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 3310315 is not found' 00:37:33.824 Process with pid 3310315 is not found 00:37:33.824 04:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:37:33.824 04:48:47 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:37.126 Waiting for block devices as requested 00:37:37.126 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:37.126 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:37.126 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:37.126 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:37.126 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:37.386 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:37.386 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:37.386 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:37.646 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:37:37.646 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:37:37.906 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:37:37.906 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:37:37.906 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:37:37.906 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:37:38.178 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:37:38.178 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:37:38.178 
0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:37:38.439 04:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:38.439 04:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:38.439 04:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:37:38.439 04:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:37:38.439 04:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:38.439 04:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:37:38.439 04:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:38.439 04:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:38.439 04:48:52 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:38.439 04:48:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:37:38.439 04:48:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:40.986 04:48:54 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:40.986 00:37:40.986 real 0m51.740s 00:37:40.986 user 1m4.835s 00:37:40.986 sys 0m18.755s 00:37:40.986 04:48:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:40.986 04:48:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:40.986 ************************************ 00:37:40.986 END TEST nvmf_abort_qd_sizes 00:37:40.986 ************************************ 00:37:40.986 04:48:54 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:40.986 04:48:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:37:40.986 04:48:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:37:40.986 04:48:54 -- common/autotest_common.sh@10 -- # set +x 00:37:40.986 ************************************ 00:37:40.986 START TEST keyring_file 00:37:40.986 ************************************ 00:37:40.986 04:48:54 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:37:40.986 * Looking for test storage... 
00:37:40.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:40.986 04:48:54 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:40.986 04:48:54 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:37:40.986 04:48:54 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:40.986 04:48:54 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@345 -- # : 1 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@353 -- # local d=1 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@355 -- # echo 1 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@353 -- # local d=2 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@355 -- # echo 2 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:40.986 04:48:54 keyring_file -- scripts/common.sh@368 -- # return 0 00:37:40.986 04:48:54 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:40.986 04:48:54 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:40.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.986 --rc genhtml_branch_coverage=1 00:37:40.986 --rc genhtml_function_coverage=1 00:37:40.986 --rc genhtml_legend=1 00:37:40.986 --rc geninfo_all_blocks=1 00:37:40.986 --rc geninfo_unexecuted_blocks=1 00:37:40.986 00:37:40.986 ' 00:37:40.986 04:48:54 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:40.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.986 --rc genhtml_branch_coverage=1 00:37:40.986 --rc genhtml_function_coverage=1 00:37:40.986 --rc genhtml_legend=1 00:37:40.986 --rc geninfo_all_blocks=1 
00:37:40.986 --rc geninfo_unexecuted_blocks=1 00:37:40.986 00:37:40.986 ' 00:37:40.986 04:48:54 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:40.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.986 --rc genhtml_branch_coverage=1 00:37:40.986 --rc genhtml_function_coverage=1 00:37:40.986 --rc genhtml_legend=1 00:37:40.986 --rc geninfo_all_blocks=1 00:37:40.986 --rc geninfo_unexecuted_blocks=1 00:37:40.986 00:37:40.986 ' 00:37:40.987 04:48:54 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:40.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:40.987 --rc genhtml_branch_coverage=1 00:37:40.987 --rc genhtml_function_coverage=1 00:37:40.987 --rc genhtml_legend=1 00:37:40.987 --rc geninfo_all_blocks=1 00:37:40.987 --rc geninfo_unexecuted_blocks=1 00:37:40.987 00:37:40.987 ' 00:37:40.987 04:48:54 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:40.987 04:48:54 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:37:40.987 04:48:54 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:40.987 04:48:54 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:40.987 04:48:54 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:40.987 04:48:54 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.987 04:48:54 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.987 04:48:54 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.987 04:48:54 keyring_file -- paths/export.sh@5 -- # export PATH 00:37:40.987 04:48:54 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@51 -- # : 0 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:40.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:40.987 04:48:54 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:40.987 04:48:54 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:40.987 04:48:54 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:37:40.987 04:48:54 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:37:40.987 04:48:54 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:37:40.987 04:48:54 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
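prep_key, whose trace starts here and continues below, turns each raw key string into a key file in the NVMe TLS PSK interchange format; the actual conversion happens in a "python -" heredoc that xtrace cannot show. A hypothetical reconstruction of that step, assuming the TP 8006 layout that the NVMeTLSkey-1 prefix suggests (base64 of the key bytes with a little-endian CRC32 appended) and assuming the key argument is used as ASCII bytes:

# Hypothetical reconstruction of format_interchange_psk's hidden heredoc;
# the key+CRC32 layout and the byte interpretation are assumptions.
key=00112233445566778899aabbccddeeff
digest=0   # the trace passes digest 0 for both keys
python - <<EOF
import base64, zlib
key = b"$key"
crc = zlib.crc32(key).to_bytes(4, "little")
print("NVMeTLSkey-1:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()))
EOF

The trace then chmods the resulting file to 0600, a requirement that the permission test near the end of this section exercises deliberately.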
00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.5XeZoCUe6K 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.5XeZoCUe6K 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.5XeZoCUe6K 00:37:40.987 04:48:54 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.5XeZoCUe6K 00:37:40.987 04:48:54 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@17 -- # name=key1 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1cqjMOGtlQ 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:40.987 04:48:54 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1cqjMOGtlQ 00:37:40.987 04:48:54 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1cqjMOGtlQ 00:37:40.987 04:48:54 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.1cqjMOGtlQ 00:37:40.987 04:48:54 keyring_file -- keyring/file.sh@30 -- # tgtpid=3320373 00:37:40.987 04:48:54 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3320373 00:37:40.987 04:48:54 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:40.987 04:48:54 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3320373 ']' 00:37:40.987 04:48:54 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:40.987 04:48:54 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:40.987 04:48:54 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:40.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:40.987 04:48:54 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:40.987 04:48:54 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:40.987 [2024-11-05 04:48:54.601975] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:37:40.987 [2024-11-05 04:48:54.602034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320373 ] 00:37:41.249 [2024-11-05 04:48:54.671268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:41.249 [2024-11-05 04:48:54.707503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:37:41.821 04:48:55 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:41.821 [2024-11-05 04:48:55.372927] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:41.821 null0 00:37:41.821 [2024-11-05 04:48:55.404970] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:41.821 [2024-11-05 04:48:55.405244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:41.821 04:48:55 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:41.821 [2024-11-05 04:48:55.437040] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:37:41.821 request: 00:37:41.821 { 00:37:41.821 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:37:41.821 "secure_channel": false, 00:37:41.821 "listen_address": { 00:37:41.821 "trtype": "tcp", 00:37:41.821 "traddr": "127.0.0.1", 00:37:41.821 "trsvcid": "4420" 00:37:41.821 }, 00:37:41.821 "method": "nvmf_subsystem_add_listener", 00:37:41.821 "req_id": 1 00:37:41.821 } 00:37:41.821 Got JSON-RPC error response 00:37:41.821 response: 00:37:41.821 { 00:37:41.821 
"code": -32602, 00:37:41.821 "message": "Invalid parameters" 00:37:41.821 } 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:41.821 04:48:55 keyring_file -- keyring/file.sh@47 -- # bperfpid=3320531 00:37:41.821 04:48:55 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3320531 /var/tmp/bperf.sock 00:37:41.821 04:48:55 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3320531 ']' 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:41.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:41.821 04:48:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:42.081 [2024-11-05 04:48:55.494439] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:37:42.081 [2024-11-05 04:48:55.494488] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3320531 ] 00:37:42.081 [2024-11-05 04:48:55.583315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:42.081 [2024-11-05 04:48:55.619315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:42.653 04:48:56 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:42.653 04:48:56 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:37:42.653 04:48:56 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5XeZoCUe6K 00:37:42.653 04:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5XeZoCUe6K 00:37:42.914 04:48:56 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.1cqjMOGtlQ 00:37:42.914 04:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.1cqjMOGtlQ 00:37:43.174 04:48:56 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:37:43.174 04:48:56 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:37:43.174 04:48:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:43.174 04:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.174 04:48:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:37:43.174 04:48:56 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.5XeZoCUe6K == \/\t\m\p\/\t\m\p\.\5\X\e\Z\o\C\U\e\6\K ]] 00:37:43.174 04:48:56 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:37:43.174 04:48:56 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:37:43.174 04:48:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:43.174 04:48:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:43.174 04:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.435 04:48:56 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.1cqjMOGtlQ == \/\t\m\p\/\t\m\p\.\1\c\q\j\M\O\G\t\l\Q ]] 00:37:43.435 04:48:56 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:37:43.435 04:48:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:43.435 04:48:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:43.435 04:48:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:43.435 04:48:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.435 04:48:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:43.696 04:48:57 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:37:43.696 04:48:57 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:37:43.696 04:48:57 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:43.696 04:48:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:43.696 04:48:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:43.696 04:48:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:43.696 04:48:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:43.696 04:48:57 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:37:43.696 04:48:57 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:43.696 04:48:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:43.956 [2024-11-05 04:48:57.437788] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:43.956 nvme0n1 00:37:43.957 04:48:57 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:37:43.957 04:48:57 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:43.957 04:48:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:43.957 04:48:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:43.957 04:48:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:43.957 04:48:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:44.217 04:48:57 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:37:44.217 04:48:57 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:37:44.217 04:48:57 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:37:44.217 04:48:57 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:44.217 04:48:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:44.217 04:48:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:44.217 04:48:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:44.477 04:48:57 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:37:44.478 04:48:57 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:44.478 Running I/O for 1 seconds... 00:37:45.421 15231.00 IOPS, 59.50 MiB/s 00:37:45.421 Latency(us) 00:37:45.421 [2024-11-05T03:48:59.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:45.421 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:37:45.421 nvme0n1 : 1.01 15248.73 59.57 0.00 0.00 8361.63 5652.48 15728.64 00:37:45.421 [2024-11-05T03:48:59.061Z] =================================================================================================================== 00:37:45.421 [2024-11-05T03:48:59.061Z] Total : 15248.73 59.57 0.00 0.00 8361.63 5652.48 15728.64 00:37:45.421 { 00:37:45.421 "results": [ 00:37:45.421 { 00:37:45.421 "job": "nvme0n1", 00:37:45.421 "core_mask": "0x2", 00:37:45.421 "workload": "randrw", 00:37:45.421 "percentage": 50, 00:37:45.421 "status": "finished", 00:37:45.421 "queue_depth": 128, 00:37:45.421 "io_size": 4096, 00:37:45.421 "runtime": 1.007297, 00:37:45.421 "iops": 15248.730017065473, 00:37:45.421 "mibps": 59.565351629162, 00:37:45.421 "io_failed": 0, 00:37:45.421 "io_timeout": 0, 00:37:45.421 "avg_latency_us": 8361.634666666667, 00:37:45.421 "min_latency_us": 5652.48, 00:37:45.421 "max_latency_us": 15728.64 00:37:45.421 } 00:37:45.421 ], 00:37:45.421 "core_count": 1 00:37:45.421 } 00:37:45.421 04:48:59 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:45.421 04:48:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:45.733 04:48:59 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:37:45.733 04:48:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:45.733 04:48:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:45.733 04:48:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:45.733 04:48:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:45.733 04:48:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:46.042 04:48:59 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:37:46.042 04:48:59 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:37:46.042 04:48:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:46.042 04:48:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:46.042 04:48:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:46.042 04:48:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:46.042 04:48:59 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:46.042 04:48:59 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:37:46.042 04:48:59 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:46.042 04:48:59 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:46.042 04:48:59 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:46.042 04:48:59 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:46.042 04:48:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:46.042 04:48:59 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:46.042 04:48:59 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:46.042 04:48:59 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:46.042 04:48:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:46.326 [2024-11-05 04:48:59.708978] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:46.326 [2024-11-05 04:48:59.709706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212b870 (107): Transport endpoint is not connected 00:37:46.326 [2024-11-05 04:48:59.710701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212b870 (9): Bad file descriptor 00:37:46.326 [2024-11-05 04:48:59.711703] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:46.326 [2024-11-05 04:48:59.711710] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:46.326 [2024-11-05 04:48:59.711716] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:46.326 [2024-11-05 04:48:59.711722] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
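The failed attach above uses key1 where the earlier successful attach used key0, and it runs under the NOT wrapper from autotest_common.sh, whose fragments (local es=0, (( es > 128 )), (( !es == 0 ))) recur throughout this trace. A simplified sketch of what those fragments imply; the real helper also screens its argument through valid_exec_arg:

# Simplified NOT helper: succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return "$es"   # death by signal counts as a real failure
    (( es != 0 ))
}

# Expected-failure usage, as in the trace:
NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1

The JSON-RPC request and error response that follow are the bperf side of that expected failure.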
00:37:46.326 request: 00:37:46.326 { 00:37:46.326 "name": "nvme0", 00:37:46.326 "trtype": "tcp", 00:37:46.326 "traddr": "127.0.0.1", 00:37:46.326 "adrfam": "ipv4", 00:37:46.326 "trsvcid": "4420", 00:37:46.326 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:46.326 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:46.326 "prchk_reftag": false, 00:37:46.326 "prchk_guard": false, 00:37:46.326 "hdgst": false, 00:37:46.326 "ddgst": false, 00:37:46.326 "psk": "key1", 00:37:46.326 "allow_unrecognized_csi": false, 00:37:46.326 "method": "bdev_nvme_attach_controller", 00:37:46.326 "req_id": 1 00:37:46.326 } 00:37:46.326 Got JSON-RPC error response 00:37:46.326 response: 00:37:46.326 { 00:37:46.326 "code": -5, 00:37:46.326 "message": "Input/output error" 00:37:46.326 } 00:37:46.326 04:48:59 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:46.326 04:48:59 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:46.326 04:48:59 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:46.326 04:48:59 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:46.326 04:48:59 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:37:46.326 04:48:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:46.326 04:48:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:46.326 04:48:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:46.326 04:48:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:46.326 04:48:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:46.326 04:48:59 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:46.326 04:48:59 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:37:46.326 04:48:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:46.326 04:48:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:46.326 04:48:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:46.326 04:48:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:46.326 04:48:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:46.586 04:49:00 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:37:46.586 04:49:00 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:37:46.586 04:49:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:46.847 04:49:00 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:37:46.847 04:49:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:46.847 04:49:00 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:37:46.847 04:49:00 keyring_file -- keyring/file.sh@78 -- # jq length 00:37:46.847 04:49:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:47.107 04:49:00 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:37:47.107 04:49:00 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.5XeZoCUe6K 00:37:47.107 04:49:00 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.5XeZoCUe6K 00:37:47.107 04:49:00 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:47.107 04:49:00 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.5XeZoCUe6K 00:37:47.107 04:49:00 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:47.107 04:49:00 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:47.107 04:49:00 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:47.107 04:49:00 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:47.107 04:49:00 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5XeZoCUe6K 00:37:47.107 04:49:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5XeZoCUe6K 00:37:47.368 [2024-11-05 04:49:00.760438] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5XeZoCUe6K': 0100660 00:37:47.368 [2024-11-05 04:49:00.760459] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:47.368 request: 00:37:47.368 { 00:37:47.368 "name": "key0", 00:37:47.368 "path": "/tmp/tmp.5XeZoCUe6K", 00:37:47.368 "method": "keyring_file_add_key", 00:37:47.368 "req_id": 1 00:37:47.368 } 00:37:47.368 Got JSON-RPC error response 00:37:47.368 response: 00:37:47.368 { 00:37:47.368 "code": -1, 00:37:47.368 "message": "Operation not permitted" 00:37:47.368 } 00:37:47.368 04:49:00 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:47.368 04:49:00 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:47.368 04:49:00 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:47.368 04:49:00 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:47.368 04:49:00 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.5XeZoCUe6K 00:37:47.368 04:49:00 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.5XeZoCUe6K 00:37:47.368 04:49:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.5XeZoCUe6K 00:37:47.368 04:49:00 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.5XeZoCUe6K 00:37:47.368 04:49:00 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:37:47.368 04:49:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:47.368 04:49:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:47.368 04:49:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:47.368 04:49:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:47.368 04:49:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:47.628 04:49:01 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:37:47.628 04:49:01 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:47.628 04:49:01 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:37:47.628 04:49:01 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:47.628 04:49:01 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:47.628 04:49:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:47.628 04:49:01 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:47.628 04:49:01 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:47.628 04:49:01 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:47.628 04:49:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:47.889 [2024-11-05 04:49:01.301813] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.5XeZoCUe6K': No such file or directory 00:37:47.889 [2024-11-05 04:49:01.301826] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:47.889 [2024-11-05 04:49:01.301840] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:47.889 [2024-11-05 04:49:01.301845] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:37:47.889 [2024-11-05 04:49:01.301851] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:47.889 [2024-11-05 04:49:01.301856] bdev_nvme.c:6576:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:47.889 request: 00:37:47.889 { 00:37:47.889 "name": "nvme0", 00:37:47.889 "trtype": "tcp", 00:37:47.889 "traddr": "127.0.0.1", 00:37:47.889 "adrfam": "ipv4", 00:37:47.889 "trsvcid": "4420", 00:37:47.889 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:47.889 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:47.889 "prchk_reftag": false, 00:37:47.889 "prchk_guard": false, 00:37:47.889 "hdgst": false, 00:37:47.889 "ddgst": false, 00:37:47.889 "psk": "key0", 00:37:47.889 "allow_unrecognized_csi": false, 00:37:47.889 "method": "bdev_nvme_attach_controller", 00:37:47.889 "req_id": 1 00:37:47.889 } 00:37:47.889 Got JSON-RPC error response 00:37:47.889 response: 00:37:47.889 { 00:37:47.889 "code": -19, 00:37:47.889 "message": "No such device" 00:37:47.889 } 00:37:47.889 04:49:01 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:37:47.889 04:49:01 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:47.889 04:49:01 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:47.889 04:49:01 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:47.889 04:49:01 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:37:47.889 04:49:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:47.889 04:49:01 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:47.889 04:49:01 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:37:47.889 04:49:01 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:47.889 04:49:01 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:47.889 04:49:01 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:47.889 04:49:01 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:48.150 04:49:01 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.1htiZVB4jz 00:37:48.150 04:49:01 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:48.150 04:49:01 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:48.150 04:49:01 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:37:48.150 04:49:01 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:48.150 04:49:01 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:48.150 04:49:01 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:37:48.150 04:49:01 keyring_file -- nvmf/common.sh@733 -- # python - 00:37:48.150 04:49:01 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.1htiZVB4jz 00:37:48.150 04:49:01 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.1htiZVB4jz 00:37:48.150 04:49:01 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.1htiZVB4jz 00:37:48.150 04:49:01 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1htiZVB4jz 00:37:48.150 04:49:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1htiZVB4jz 00:37:48.150 04:49:01 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:48.150 04:49:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:48.410 nvme0n1 00:37:48.410 04:49:01 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:37:48.410 04:49:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:48.410 04:49:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:48.410 04:49:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:48.410 04:49:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:48.410 04:49:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:48.670 04:49:02 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:37:48.670 04:49:02 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:37:48.670 04:49:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:48.931 04:49:02 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:37:48.931 04:49:02 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:37:48.931 04:49:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:48.931 04:49:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:37:48.931 04:49:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:48.931 04:49:02 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:37:48.931 04:49:02 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:37:48.931 04:49:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:48.931 04:49:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:48.931 04:49:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:48.931 04:49:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:48.931 04:49:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:49.191 04:49:02 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:37:49.191 04:49:02 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:49.191 04:49:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:49.191 04:49:02 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:37:49.191 04:49:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:49.191 04:49:02 keyring_file -- keyring/file.sh@105 -- # jq length 00:37:49.452 04:49:02 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:37:49.452 04:49:02 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.1htiZVB4jz 00:37:49.452 04:49:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.1htiZVB4jz 00:37:49.713 04:49:03 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.1cqjMOGtlQ 00:37:49.713 04:49:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.1cqjMOGtlQ 00:37:49.713 04:49:03 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:49.713 04:49:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:49.974 nvme0n1 00:37:49.974 04:49:03 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:37:49.974 04:49:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:50.234 04:49:03 keyring_file -- keyring/file.sh@113 -- # config='{ 00:37:50.234 "subsystems": [ 00:37:50.234 { 00:37:50.234 "subsystem": "keyring", 00:37:50.234 "config": [ 00:37:50.234 { 00:37:50.234 "method": "keyring_file_add_key", 00:37:50.234 "params": { 00:37:50.234 "name": "key0", 00:37:50.234 "path": "/tmp/tmp.1htiZVB4jz" 00:37:50.234 } 00:37:50.234 }, 00:37:50.234 { 00:37:50.234 "method": "keyring_file_add_key", 00:37:50.234 "params": { 00:37:50.234 "name": "key1", 00:37:50.235 "path": "/tmp/tmp.1cqjMOGtlQ" 00:37:50.235 } 00:37:50.235 } 00:37:50.235 ] 
00:37:50.235 }, 00:37:50.235 { 00:37:50.235 "subsystem": "iobuf", 00:37:50.235 "config": [ 00:37:50.235 { 00:37:50.235 "method": "iobuf_set_options", 00:37:50.235 "params": { 00:37:50.235 "small_pool_count": 8192, 00:37:50.235 "large_pool_count": 1024, 00:37:50.235 "small_bufsize": 8192, 00:37:50.235 "large_bufsize": 135168, 00:37:50.235 "enable_numa": false 00:37:50.235 } 00:37:50.235 } 00:37:50.235 ] 00:37:50.235 }, 00:37:50.235 { 00:37:50.235 "subsystem": "sock", 00:37:50.235 "config": [ 00:37:50.235 { 00:37:50.235 "method": "sock_set_default_impl", 00:37:50.235 "params": { 00:37:50.235 "impl_name": "posix" 00:37:50.235 } 00:37:50.235 }, 00:37:50.235 { 00:37:50.235 "method": "sock_impl_set_options", 00:37:50.235 "params": { 00:37:50.235 "impl_name": "ssl", 00:37:50.235 "recv_buf_size": 4096, 00:37:50.235 "send_buf_size": 4096, 00:37:50.235 "enable_recv_pipe": true, 00:37:50.235 "enable_quickack": false, 00:37:50.235 "enable_placement_id": 0, 00:37:50.235 "enable_zerocopy_send_server": true, 00:37:50.235 "enable_zerocopy_send_client": false, 00:37:50.235 "zerocopy_threshold": 0, 00:37:50.235 "tls_version": 0, 00:37:50.235 "enable_ktls": false 00:37:50.235 } 00:37:50.235 }, 00:37:50.235 { 00:37:50.235 "method": "sock_impl_set_options", 00:37:50.235 "params": { 00:37:50.235 "impl_name": "posix", 00:37:50.235 "recv_buf_size": 2097152, 00:37:50.235 "send_buf_size": 2097152, 00:37:50.235 "enable_recv_pipe": true, 00:37:50.235 "enable_quickack": false, 00:37:50.235 "enable_placement_id": 0, 00:37:50.235 "enable_zerocopy_send_server": true, 00:37:50.235 "enable_zerocopy_send_client": false, 00:37:50.235 "zerocopy_threshold": 0, 00:37:50.235 "tls_version": 0, 00:37:50.235 "enable_ktls": false 00:37:50.235 } 00:37:50.235 } 00:37:50.235 ] 00:37:50.235 }, 00:37:50.235 { 00:37:50.235 "subsystem": "vmd", 00:37:50.235 "config": [] 00:37:50.235 }, 00:37:50.235 { 00:37:50.235 "subsystem": "accel", 00:37:50.235 "config": [ 00:37:50.235 { 00:37:50.235 "method": "accel_set_options", 00:37:50.235 "params": { 00:37:50.235 "small_cache_size": 128, 00:37:50.235 "large_cache_size": 16, 00:37:50.235 "task_count": 2048, 00:37:50.235 "sequence_count": 2048, 00:37:50.235 "buf_count": 2048 00:37:50.235 } 00:37:50.235 } 00:37:50.235 ] 00:37:50.235 }, 00:37:50.235 { 00:37:50.235 "subsystem": "bdev", 00:37:50.235 "config": [ 00:37:50.235 { 00:37:50.235 "method": "bdev_set_options", 00:37:50.235 "params": { 00:37:50.235 "bdev_io_pool_size": 65535, 00:37:50.235 "bdev_io_cache_size": 256, 00:37:50.235 "bdev_auto_examine": true, 00:37:50.235 "iobuf_small_cache_size": 128, 00:37:50.235 "iobuf_large_cache_size": 16 00:37:50.235 } 00:37:50.235 }, 00:37:50.235 { 00:37:50.235 "method": "bdev_raid_set_options", 00:37:50.235 "params": { 00:37:50.235 "process_window_size_kb": 1024, 00:37:50.235 "process_max_bandwidth_mb_sec": 0 00:37:50.235 } 00:37:50.235 }, 00:37:50.235 { 00:37:50.235 "method": "bdev_iscsi_set_options", 00:37:50.235 "params": { 00:37:50.235 "timeout_sec": 30 00:37:50.235 } 00:37:50.235 }, 00:37:50.235 { 00:37:50.235 "method": "bdev_nvme_set_options", 00:37:50.235 "params": { 00:37:50.235 "action_on_timeout": "none", 00:37:50.235 "timeout_us": 0, 00:37:50.235 "timeout_admin_us": 0, 00:37:50.235 "keep_alive_timeout_ms": 10000, 00:37:50.235 "arbitration_burst": 0, 00:37:50.235 "low_priority_weight": 0, 00:37:50.235 "medium_priority_weight": 0, 00:37:50.235 "high_priority_weight": 0, 00:37:50.235 "nvme_adminq_poll_period_us": 10000, 00:37:50.235 "nvme_ioq_poll_period_us": 0, 00:37:50.235 "io_queue_requests": 512, 
00:37:50.235 "delay_cmd_submit": true, 00:37:50.235 "transport_retry_count": 4, 00:37:50.235 "bdev_retry_count": 3, 00:37:50.235 "transport_ack_timeout": 0, 00:37:50.235 "ctrlr_loss_timeout_sec": 0, 00:37:50.235 "reconnect_delay_sec": 0, 00:37:50.235 "fast_io_fail_timeout_sec": 0, 00:37:50.235 "disable_auto_failback": false, 00:37:50.235 "generate_uuids": false, 00:37:50.235 "transport_tos": 0, 00:37:50.235 "nvme_error_stat": false, 00:37:50.235 "rdma_srq_size": 0, 00:37:50.235 "io_path_stat": false, 00:37:50.235 "allow_accel_sequence": false, 00:37:50.235 "rdma_max_cq_size": 0, 00:37:50.235 "rdma_cm_event_timeout_ms": 0, 00:37:50.235 "dhchap_digests": [ 00:37:50.235 "sha256", 00:37:50.235 "sha384", 00:37:50.235 "sha512" 00:37:50.235 ], 00:37:50.235 "dhchap_dhgroups": [ 00:37:50.235 "null", 00:37:50.235 "ffdhe2048", 00:37:50.235 "ffdhe3072", 00:37:50.235 "ffdhe4096", 00:37:50.235 "ffdhe6144", 00:37:50.235 "ffdhe8192" 00:37:50.235 ] 00:37:50.235 } 00:37:50.235 }, 00:37:50.235 { 00:37:50.235 "method": "bdev_nvme_attach_controller", 00:37:50.235 "params": { 00:37:50.235 "name": "nvme0", 00:37:50.235 "trtype": "TCP", 00:37:50.235 "adrfam": "IPv4", 00:37:50.235 "traddr": "127.0.0.1", 00:37:50.235 "trsvcid": "4420", 00:37:50.235 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:50.235 "prchk_reftag": false, 00:37:50.235 "prchk_guard": false, 00:37:50.235 "ctrlr_loss_timeout_sec": 0, 00:37:50.235 "reconnect_delay_sec": 0, 00:37:50.235 "fast_io_fail_timeout_sec": 0, 00:37:50.235 "psk": "key0", 00:37:50.235 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:50.235 "hdgst": false, 00:37:50.235 "ddgst": false, 00:37:50.235 "multipath": "multipath" 00:37:50.235 } 00:37:50.235 }, 00:37:50.235 { 00:37:50.235 "method": "bdev_nvme_set_hotplug", 00:37:50.235 "params": { 00:37:50.235 "period_us": 100000, 00:37:50.235 "enable": false 00:37:50.235 } 00:37:50.235 }, 00:37:50.235 { 00:37:50.235 "method": "bdev_wait_for_examine" 00:37:50.235 } 00:37:50.235 ] 00:37:50.235 }, 00:37:50.235 { 00:37:50.235 "subsystem": "nbd", 00:37:50.235 "config": [] 00:37:50.235 } 00:37:50.235 ] 00:37:50.235 }' 00:37:50.235 04:49:03 keyring_file -- keyring/file.sh@115 -- # killprocess 3320531 00:37:50.235 04:49:03 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3320531 ']' 00:37:50.235 04:49:03 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3320531 00:37:50.235 04:49:03 keyring_file -- common/autotest_common.sh@957 -- # uname 00:37:50.235 04:49:03 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:50.235 04:49:03 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3320531 00:37:50.235 04:49:03 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:50.235 04:49:03 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:50.235 04:49:03 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3320531' 00:37:50.235 killing process with pid 3320531 00:37:50.235 04:49:03 keyring_file -- common/autotest_common.sh@971 -- # kill 3320531 00:37:50.235 Received shutdown signal, test time was about 1.000000 seconds 00:37:50.235 00:37:50.235 Latency(us) 00:37:50.235 [2024-11-05T03:49:03.875Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:50.235 [2024-11-05T03:49:03.875Z] =================================================================================================================== 00:37:50.235 [2024-11-05T03:49:03.875Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:37:50.235 04:49:03 keyring_file -- common/autotest_common.sh@976 -- # wait 3320531 00:37:50.496 04:49:03 keyring_file -- keyring/file.sh@118 -- # bperfpid=3322341 00:37:50.496 04:49:03 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3322341 /var/tmp/bperf.sock 00:37:50.496 04:49:03 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 3322341 ']' 00:37:50.496 04:49:03 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:50.496 04:49:03 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:50.496 04:49:03 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:50.496 04:49:03 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:50.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:50.496 04:49:03 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:50.496 04:49:03 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:50.496 04:49:03 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:37:50.496 "subsystems": [ 00:37:50.496 { 00:37:50.496 "subsystem": "keyring", 00:37:50.496 "config": [ 00:37:50.496 { 00:37:50.496 "method": "keyring_file_add_key", 00:37:50.496 "params": { 00:37:50.496 "name": "key0", 00:37:50.496 "path": "/tmp/tmp.1htiZVB4jz" 00:37:50.496 } 00:37:50.496 }, 00:37:50.496 { 00:37:50.496 "method": "keyring_file_add_key", 00:37:50.496 "params": { 00:37:50.496 "name": "key1", 00:37:50.496 "path": "/tmp/tmp.1cqjMOGtlQ" 00:37:50.496 } 00:37:50.496 } 00:37:50.497 ] 00:37:50.497 }, 00:37:50.497 { 00:37:50.497 "subsystem": "iobuf", 00:37:50.497 "config": [ 00:37:50.497 { 00:37:50.497 "method": "iobuf_set_options", 00:37:50.497 "params": { 00:37:50.497 "small_pool_count": 8192, 00:37:50.497 "large_pool_count": 1024, 00:37:50.497 "small_bufsize": 8192, 00:37:50.497 "large_bufsize": 135168, 00:37:50.497 "enable_numa": false 00:37:50.497 } 00:37:50.497 } 00:37:50.497 ] 00:37:50.497 }, 00:37:50.497 { 00:37:50.497 "subsystem": "sock", 00:37:50.497 "config": [ 00:37:50.497 { 00:37:50.497 "method": "sock_set_default_impl", 00:37:50.497 "params": { 00:37:50.497 "impl_name": "posix" 00:37:50.497 } 00:37:50.497 }, 00:37:50.497 { 00:37:50.497 "method": "sock_impl_set_options", 00:37:50.497 "params": { 00:37:50.497 "impl_name": "ssl", 00:37:50.497 "recv_buf_size": 4096, 00:37:50.497 "send_buf_size": 4096, 00:37:50.497 "enable_recv_pipe": true, 00:37:50.497 "enable_quickack": false, 00:37:50.497 "enable_placement_id": 0, 00:37:50.497 "enable_zerocopy_send_server": true, 00:37:50.497 "enable_zerocopy_send_client": false, 00:37:50.497 "zerocopy_threshold": 0, 00:37:50.497 "tls_version": 0, 00:37:50.497 "enable_ktls": false 00:37:50.497 } 00:37:50.497 }, 00:37:50.497 { 00:37:50.497 "method": "sock_impl_set_options", 00:37:50.497 "params": { 00:37:50.497 "impl_name": "posix", 00:37:50.497 "recv_buf_size": 2097152, 00:37:50.497 "send_buf_size": 2097152, 00:37:50.497 "enable_recv_pipe": true, 00:37:50.497 "enable_quickack": false, 00:37:50.497 "enable_placement_id": 0, 00:37:50.497 "enable_zerocopy_send_server": true, 00:37:50.497 "enable_zerocopy_send_client": false, 00:37:50.497 "zerocopy_threshold": 0, 00:37:50.497 "tls_version": 0, 00:37:50.497 "enable_ktls": false 00:37:50.497 } 00:37:50.497 } 00:37:50.497 ] 
00:37:50.497 }, 00:37:50.497 { 00:37:50.497 "subsystem": "vmd", 00:37:50.497 "config": [] 00:37:50.497 }, 00:37:50.497 { 00:37:50.497 "subsystem": "accel", 00:37:50.497 "config": [ 00:37:50.497 { 00:37:50.497 "method": "accel_set_options", 00:37:50.497 "params": { 00:37:50.497 "small_cache_size": 128, 00:37:50.497 "large_cache_size": 16, 00:37:50.497 "task_count": 2048, 00:37:50.497 "sequence_count": 2048, 00:37:50.497 "buf_count": 2048 00:37:50.497 } 00:37:50.497 } 00:37:50.497 ] 00:37:50.497 }, 00:37:50.497 { 00:37:50.497 "subsystem": "bdev", 00:37:50.497 "config": [ 00:37:50.497 { 00:37:50.497 "method": "bdev_set_options", 00:37:50.497 "params": { 00:37:50.497 "bdev_io_pool_size": 65535, 00:37:50.497 "bdev_io_cache_size": 256, 00:37:50.497 "bdev_auto_examine": true, 00:37:50.497 "iobuf_small_cache_size": 128, 00:37:50.497 "iobuf_large_cache_size": 16 00:37:50.497 } 00:37:50.497 }, 00:37:50.497 { 00:37:50.497 "method": "bdev_raid_set_options", 00:37:50.497 "params": { 00:37:50.497 "process_window_size_kb": 1024, 00:37:50.497 "process_max_bandwidth_mb_sec": 0 00:37:50.497 } 00:37:50.497 }, 00:37:50.497 { 00:37:50.497 "method": "bdev_iscsi_set_options", 00:37:50.497 "params": { 00:37:50.497 "timeout_sec": 30 00:37:50.497 } 00:37:50.497 }, 00:37:50.497 { 00:37:50.497 "method": "bdev_nvme_set_options", 00:37:50.497 "params": { 00:37:50.497 "action_on_timeout": "none", 00:37:50.497 "timeout_us": 0, 00:37:50.497 "timeout_admin_us": 0, 00:37:50.497 "keep_alive_timeout_ms": 10000, 00:37:50.497 "arbitration_burst": 0, 00:37:50.497 "low_priority_weight": 0, 00:37:50.497 "medium_priority_weight": 0, 00:37:50.497 "high_priority_weight": 0, 00:37:50.497 "nvme_adminq_poll_period_us": 10000, 00:37:50.497 "nvme_ioq_poll_period_us": 0, 00:37:50.497 "io_queue_requests": 512, 00:37:50.497 "delay_cmd_submit": true, 00:37:50.497 "transport_retry_count": 4, 00:37:50.497 "bdev_retry_count": 3, 00:37:50.497 "transport_ack_timeout": 0, 00:37:50.497 "ctrlr_loss_timeout_sec": 0, 00:37:50.497 "reconnect_delay_sec": 0, 00:37:50.497 "fast_io_fail_timeout_sec": 0, 00:37:50.497 "disable_auto_failback": false, 00:37:50.497 "generate_uuids": false, 00:37:50.497 "transport_tos": 0, 00:37:50.497 "nvme_error_stat": false, 00:37:50.497 "rdma_srq_size": 0, 00:37:50.497 "io_path_stat": false, 00:37:50.497 "allow_accel_sequence": false, 00:37:50.497 "rdma_max_cq_size": 0, 00:37:50.497 "rdma_cm_event_timeout_ms": 0, 00:37:50.497 "dhchap_digests": [ 00:37:50.497 "sha256", 00:37:50.497 "sha384", 00:37:50.497 "sha512" 00:37:50.497 ], 00:37:50.497 "dhchap_dhgroups": [ 00:37:50.497 "null", 00:37:50.497 "ffdhe2048", 00:37:50.497 "ffdhe3072", 00:37:50.497 "ffdhe4096", 00:37:50.497 "ffdhe6144", 00:37:50.497 "ffdhe8192" 00:37:50.497 ] 00:37:50.497 } 00:37:50.497 }, 00:37:50.497 { 00:37:50.497 "method": "bdev_nvme_attach_controller", 00:37:50.497 "params": { 00:37:50.497 "name": "nvme0", 00:37:50.497 "trtype": "TCP", 00:37:50.497 "adrfam": "IPv4", 00:37:50.497 "traddr": "127.0.0.1", 00:37:50.497 "trsvcid": "4420", 00:37:50.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:50.497 "prchk_reftag": false, 00:37:50.497 "prchk_guard": false, 00:37:50.497 "ctrlr_loss_timeout_sec": 0, 00:37:50.497 "reconnect_delay_sec": 0, 00:37:50.497 "fast_io_fail_timeout_sec": 0, 00:37:50.497 "psk": "key0", 00:37:50.497 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:50.497 "hdgst": false, 00:37:50.497 "ddgst": false, 00:37:50.497 "multipath": "multipath" 00:37:50.497 } 00:37:50.497 }, 00:37:50.497 { 00:37:50.497 "method": "bdev_nvme_set_hotplug", 00:37:50.497 
"params": { 00:37:50.497 "period_us": 100000, 00:37:50.497 "enable": false 00:37:50.497 } 00:37:50.497 }, 00:37:50.497 { 00:37:50.497 "method": "bdev_wait_for_examine" 00:37:50.497 } 00:37:50.497 ] 00:37:50.497 }, 00:37:50.497 { 00:37:50.497 "subsystem": "nbd", 00:37:50.497 "config": [] 00:37:50.497 } 00:37:50.497 ] 00:37:50.497 }' 00:37:50.497 [2024-11-05 04:49:04.009135] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 00:37:50.497 [2024-11-05 04:49:04.009194] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322341 ] 00:37:50.497 [2024-11-05 04:49:04.092559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.497 [2024-11-05 04:49:04.121717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:50.758 [2024-11-05 04:49:04.264527] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:51.330 04:49:04 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:51.330 04:49:04 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:37:51.330 04:49:04 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:37:51.330 04:49:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:51.330 04:49:04 keyring_file -- keyring/file.sh@121 -- # jq length 00:37:51.591 04:49:04 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:51.591 04:49:04 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:37:51.591 04:49:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:51.591 04:49:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:51.591 04:49:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:51.591 04:49:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:51.591 04:49:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:51.591 04:49:05 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:37:51.591 04:49:05 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:37:51.591 04:49:05 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:51.591 04:49:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:51.591 04:49:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:51.591 04:49:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:51.591 04:49:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:51.852 04:49:05 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:37:51.852 04:49:05 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:37:51.852 04:49:05 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:37:51.852 04:49:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:52.112 04:49:05 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:37:52.112 04:49:05 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:52.112 04:49:05 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.1htiZVB4jz /tmp/tmp.1cqjMOGtlQ 00:37:52.112 04:49:05 keyring_file -- keyring/file.sh@20 -- # killprocess 3322341 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3322341 ']' 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3322341 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@957 -- # uname 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3322341 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3322341' 00:37:52.112 killing process with pid 3322341 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@971 -- # kill 3322341 00:37:52.112 Received shutdown signal, test time was about 1.000000 seconds 00:37:52.112 00:37:52.112 Latency(us) 00:37:52.112 [2024-11-05T03:49:05.752Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:52.112 [2024-11-05T03:49:05.752Z] =================================================================================================================== 00:37:52.112 [2024-11-05T03:49:05.752Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@976 -- # wait 3322341 00:37:52.112 04:49:05 keyring_file -- keyring/file.sh@21 -- # killprocess 3320373 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 3320373 ']' 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@956 -- # kill -0 3320373 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@957 -- # uname 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3320373 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3320373' 00:37:52.112 killing process with pid 3320373 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@971 -- # kill 3320373 00:37:52.112 04:49:05 keyring_file -- common/autotest_common.sh@976 -- # wait 3320373 00:37:52.373 00:37:52.373 real 0m11.763s 00:37:52.373 user 0m28.264s 00:37:52.373 sys 0m2.638s 00:37:52.373 04:49:05 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:52.373 04:49:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:52.373 ************************************ 00:37:52.373 END TEST keyring_file 00:37:52.373 ************************************ 00:37:52.373 04:49:05 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:37:52.373 04:49:05 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:52.373 04:49:05 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:37:52.373 04:49:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:37:52.373 04:49:05 -- common/autotest_common.sh@10 -- # set +x 00:37:52.635 ************************************ 00:37:52.635 START TEST keyring_linux 00:37:52.635 ************************************ 00:37:52.635 04:49:06 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:52.635 Joined session keyring: 972003821 00:37:52.635 * Looking for test storage... 00:37:52.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:52.635 04:49:06 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:37:52.635 04:49:06 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:37:52.635 04:49:06 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:37:52.635 04:49:06 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@345 -- # : 1 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:52.635 04:49:06 keyring_linux -- scripts/common.sh@368 -- # return 0 00:37:52.635 04:49:06 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:52.635 04:49:06 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:37:52.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.635 --rc genhtml_branch_coverage=1 00:37:52.635 --rc genhtml_function_coverage=1 00:37:52.635 --rc genhtml_legend=1 00:37:52.635 --rc geninfo_all_blocks=1 00:37:52.635 --rc geninfo_unexecuted_blocks=1 00:37:52.635 00:37:52.635 ' 00:37:52.635 04:49:06 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:37:52.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.635 --rc genhtml_branch_coverage=1 00:37:52.635 --rc genhtml_function_coverage=1 00:37:52.635 --rc genhtml_legend=1 00:37:52.635 --rc geninfo_all_blocks=1 00:37:52.635 --rc geninfo_unexecuted_blocks=1 00:37:52.635 00:37:52.635 ' 00:37:52.635 04:49:06 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:37:52.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.635 --rc genhtml_branch_coverage=1 00:37:52.635 --rc genhtml_function_coverage=1 00:37:52.635 --rc genhtml_legend=1 00:37:52.635 --rc geninfo_all_blocks=1 00:37:52.635 --rc geninfo_unexecuted_blocks=1 00:37:52.635 00:37:52.635 ' 00:37:52.635 04:49:06 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:37:52.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.636 --rc genhtml_branch_coverage=1 00:37:52.636 --rc genhtml_function_coverage=1 00:37:52.636 --rc genhtml_legend=1 00:37:52.636 --rc geninfo_all_blocks=1 00:37:52.636 --rc geninfo_unexecuted_blocks=1 00:37:52.636 00:37:52.636 ' 00:37:52.636 04:49:06 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:52.636 04:49:06 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:52.636 04:49:06 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:37:52.636 04:49:06 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:52.636 04:49:06 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:52.636 04:49:06 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:52.636 04:49:06 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.636 04:49:06 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.636 04:49:06 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.636 04:49:06 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:52.636 04:49:06 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:37:52.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:52.636 04:49:06 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:52.636 04:49:06 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:52.636 04:49:06 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:52.636 04:49:06 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:52.636 04:49:06 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:52.636 04:49:06 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:52.636 04:49:06 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:52.636 04:49:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:52.636 04:49:06 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:52.636 04:49:06 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:52.636 04:49:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:52.636 04:49:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:52.636 04:49:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:52.636 04:49:06 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:52.898 04:49:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:52.898 04:49:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:52.898 /tmp/:spdk-test:key0 00:37:52.898 04:49:06 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:52.898 04:49:06 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:52.898 04:49:06 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:52.898 04:49:06 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:52.898 04:49:06 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:52.898 04:49:06 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:52.898 
04:49:06 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:52.898 04:49:06 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:52.898 04:49:06 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:37:52.898 04:49:06 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:37:52.898 04:49:06 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:37:52.898 04:49:06 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:37:52.898 04:49:06 keyring_linux -- nvmf/common.sh@733 -- # python - 00:37:52.898 04:49:06 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:52.898 04:49:06 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:52.898 /tmp/:spdk-test:key1 00:37:52.898 04:49:06 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:52.898 04:49:06 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3322789 00:37:52.898 04:49:06 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3322789 00:37:52.898 04:49:06 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3322789 ']' 00:37:52.898 04:49:06 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:52.898 04:49:06 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:52.898 04:49:06 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:52.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:52.898 04:49:06 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:52.898 04:49:06 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:52.898 [2024-11-05 04:49:06.418854] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
00:37:52.898 [2024-11-05 04:49:06.418950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322789 ] 00:37:52.898 [2024-11-05 04:49:06.495380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.160 [2024-11-05 04:49:06.536872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:53.731 04:49:07 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:53.731 04:49:07 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:37:53.731 04:49:07 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:53.731 04:49:07 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:37:53.731 04:49:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:53.731 [2024-11-05 04:49:07.221380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:53.731 null0 00:37:53.731 [2024-11-05 04:49:07.253434] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:53.731 [2024-11-05 04:49:07.253829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:53.731 04:49:07 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:37:53.731 04:49:07 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:53.731 88316104 00:37:53.731 04:49:07 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:53.731 365763594 00:37:53.731 04:49:07 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3322972 00:37:53.731 04:49:07 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3322972 /var/tmp/bperf.sock 00:37:53.731 04:49:07 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:53.731 04:49:07 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 3322972 ']' 00:37:53.731 04:49:07 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:53.731 04:49:07 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:37:53.731 04:49:07 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:53.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:53.731 04:49:07 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:37:53.731 04:49:07 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:53.731 [2024-11-05 04:49:07.332241] Starting SPDK v25.01-pre git sha1 d0fd7ad59 / DPDK 24.03.0 initialization... 
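
The keyring_linux variant registers the same PSKs with the kernel session keyring instead of key files. Below is a condensed sketch of the keyctl and RPC calls traced above; the key payload and names are copied from the log, and the serial numbers (88316104 and 365763594 above) are assigned by the kernel at add time.

# Register an interchange-format PSK with the kernel session keyring (@s);
# keyctl prints the assigned serial number.
keyctl add user :spdk-test:key0 \
    NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s

# Look the key up by name and dump its payload.
sn=$(keyctl search @s user :spdk-test:key0)
keyctl print "$sn"

# Once kernel-keyring support is enabled in the application, --psk names
# are resolved against the session keyring.
rpc=scripts/rpc.py
$rpc -s /var/tmp/bperf.sock keyring_linux_set_options --enable
$rpc -s /var/tmp/bperf.sock framework_start_init
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
    -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
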
00:37:53.731 [2024-11-05 04:49:07.332291] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322972 ] 00:37:53.992 [2024-11-05 04:49:07.413712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:53.992 [2024-11-05 04:49:07.443577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:54.562 04:49:08 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:37:54.562 04:49:08 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:37:54.562 04:49:08 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:54.562 04:49:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:54.823 04:49:08 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:54.823 04:49:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:55.083 04:49:08 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:55.083 04:49:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:55.083 [2024-11-05 04:49:08.671264] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:55.344 nvme0n1 00:37:55.344 04:49:08 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:55.344 04:49:08 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:55.344 04:49:08 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:55.344 04:49:08 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:55.344 04:49:08 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:55.344 04:49:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:55.344 04:49:08 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:55.344 04:49:08 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:55.344 04:49:08 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:55.344 04:49:08 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:55.344 04:49:08 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:55.344 04:49:08 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:37:55.344 04:49:08 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:55.604 04:49:09 keyring_linux -- keyring/linux.sh@25 -- # sn=88316104 00:37:55.604 04:49:09 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:55.604 04:49:09 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:55.604 04:49:09 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 88316104 == \8\8\3\1\6\1\0\4 ]] 00:37:55.604 04:49:09 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 88316104 00:37:55.604 04:49:09 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:55.604 04:49:09 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:55.604 Running I/O for 1 seconds... 00:37:56.987 16457.00 IOPS, 64.29 MiB/s 00:37:56.987 Latency(us) 00:37:56.987 [2024-11-05T03:49:10.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:56.987 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:56.987 nvme0n1 : 1.01 16457.47 64.29 0.00 0.00 7744.77 5434.03 12233.39 00:37:56.987 [2024-11-05T03:49:10.627Z] =================================================================================================================== 00:37:56.987 [2024-11-05T03:49:10.627Z] Total : 16457.47 64.29 0.00 0.00 7744.77 5434.03 12233.39 00:37:56.987 { 00:37:56.987 "results": [ 00:37:56.987 { 00:37:56.987 "job": "nvme0n1", 00:37:56.987 "core_mask": "0x2", 00:37:56.987 "workload": "randread", 00:37:56.987 "status": "finished", 00:37:56.987 "queue_depth": 128, 00:37:56.987 "io_size": 4096, 00:37:56.987 "runtime": 1.007749, 00:37:56.987 "iops": 16457.471056781003, 00:37:56.987 "mibps": 64.2869963155508, 00:37:56.987 "io_failed": 0, 00:37:56.987 "io_timeout": 0, 00:37:56.987 "avg_latency_us": 7744.765041503366, 00:37:56.987 "min_latency_us": 5434.026666666667, 00:37:56.987 "max_latency_us": 12233.386666666667 00:37:56.987 } 00:37:56.987 ], 00:37:56.987 "core_count": 1 00:37:56.987 } 00:37:56.987 04:49:10 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:56.987 04:49:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:56.987 04:49:10 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:56.987 04:49:10 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:56.987 04:49:10 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:56.987 04:49:10 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:56.987 04:49:10 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:56.987 04:49:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:56.987 04:49:10 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:56.987 04:49:10 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:56.987 04:49:10 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:56.987 04:49:10 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:56.987 04:49:10 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:37:56.987 04:49:10 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
:spdk-test:key1 00:37:56.987 04:49:10 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:37:56.987 04:49:10 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:56.987 04:49:10 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:37:56.987 04:49:10 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:37:56.987 04:49:10 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:56.987 04:49:10 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:57.248 [2024-11-05 04:49:10.774235] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:57.248 [2024-11-05 04:49:10.774312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d59b0 (107): Transport endpoint is not connected 00:37:57.248 [2024-11-05 04:49:10.775308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d59b0 (9): Bad file descriptor 00:37:57.248 [2024-11-05 04:49:10.776310] nvme_ctrlr.c:4170:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:37:57.248 [2024-11-05 04:49:10.776318] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:57.248 [2024-11-05 04:49:10.776324] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:37:57.248 [2024-11-05 04:49:10.776331] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
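This attach with :spdk-test:key1 is a negative test: the target side was set up against key0, so authenticating with key1 is expected to fail, and the JSON-RPC request/response dump that follows records the resulting error. The call is wrapped in NOT, which inverts the exit status; a hedged sketch of that wrapper (the real NOT() in autotest_common.sh also inspects the status further, e.g. the es > 128 signal check visible in the xtrace, so this is the general shape only):

# `NOT cmd args...` succeeds only when cmd fails.
NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded: the test should fail
    fi
    return 0        # non-zero exit is the expected outcome
}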
00:37:57.248 request: 00:37:57.248 { 00:37:57.248 "name": "nvme0", 00:37:57.248 "trtype": "tcp", 00:37:57.248 "traddr": "127.0.0.1", 00:37:57.248 "adrfam": "ipv4", 00:37:57.248 "trsvcid": "4420", 00:37:57.248 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:57.248 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:57.248 "prchk_reftag": false, 00:37:57.248 "prchk_guard": false, 00:37:57.248 "hdgst": false, 00:37:57.248 "ddgst": false, 00:37:57.248 "psk": ":spdk-test:key1", 00:37:57.248 "allow_unrecognized_csi": false, 00:37:57.248 "method": "bdev_nvme_attach_controller", 00:37:57.248 "req_id": 1 00:37:57.248 } 00:37:57.248 Got JSON-RPC error response 00:37:57.248 response: 00:37:57.248 { 00:37:57.248 "code": -5, 00:37:57.248 "message": "Input/output error" 00:37:57.248 } 00:37:57.248 04:49:10 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:37:57.248 04:49:10 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:37:57.248 04:49:10 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:37:57.248 04:49:10 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:37:57.248 04:49:10 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:57.248 04:49:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:57.248 04:49:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:57.248 04:49:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:57.248 04:49:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:57.248 04:49:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:57.248 04:49:10 keyring_linux -- keyring/linux.sh@33 -- # sn=88316104 00:37:57.248 04:49:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 88316104 00:37:57.248 1 links removed 00:37:57.248 04:49:10 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:57.248 04:49:10 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:57.248 04:49:10 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:57.248 04:49:10 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:57.249 04:49:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:57.249 04:49:10 keyring_linux -- keyring/linux.sh@33 -- # sn=365763594 00:37:57.249 04:49:10 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 365763594 00:37:57.249 1 links removed 00:37:57.249 04:49:10 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3322972 00:37:57.249 04:49:10 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3322972 ']' 00:37:57.249 04:49:10 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3322972 00:37:57.249 04:49:10 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:37:57.249 04:49:10 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:57.249 04:49:10 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3322972 00:37:57.249 04:49:10 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:37:57.249 04:49:10 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:37:57.249 04:49:10 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3322972' 00:37:57.249 killing process with pid 3322972 00:37:57.249 04:49:10 keyring_linux -- common/autotest_common.sh@971 -- # kill 3322972 00:37:57.249 Received shutdown signal, test time was about 1.000000 seconds 00:37:57.249 00:37:57.249 
Latency(us) 00:37:57.249 [2024-11-05T03:49:10.889Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:57.249 [2024-11-05T03:49:10.889Z] =================================================================================================================== 00:37:57.249 [2024-11-05T03:49:10.889Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:57.249 04:49:10 keyring_linux -- common/autotest_common.sh@976 -- # wait 3322972 00:37:57.509 04:49:10 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3322789 00:37:57.509 04:49:10 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 3322789 ']' 00:37:57.509 04:49:10 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 3322789 00:37:57.509 04:49:10 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:37:57.509 04:49:10 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:37:57.509 04:49:10 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 3322789 00:37:57.509 04:49:11 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:37:57.509 04:49:11 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:37:57.509 04:49:11 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 3322789' 00:37:57.509 killing process with pid 3322789 00:37:57.509 04:49:11 keyring_linux -- common/autotest_common.sh@971 -- # kill 3322789 00:37:57.509 04:49:11 keyring_linux -- common/autotest_common.sh@976 -- # wait 3322789 00:37:57.769 00:37:57.769 real 0m5.213s 00:37:57.769 user 0m9.660s 00:37:57.769 sys 0m1.426s 00:37:57.769 04:49:11 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:37:57.769 04:49:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:57.769 ************************************ 00:37:57.769 END TEST keyring_linux 00:37:57.769 ************************************ 00:37:57.769 04:49:11 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:37:57.769 04:49:11 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:57.769 04:49:11 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:57.769 04:49:11 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:37:57.769 04:49:11 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:37:57.769 04:49:11 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:37:57.769 04:49:11 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:57.769 04:49:11 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:37:57.769 04:49:11 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:37:57.769 04:49:11 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:37:57.769 04:49:11 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:37:57.769 04:49:11 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:37:57.769 04:49:11 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:37:57.769 04:49:11 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:37:57.769 04:49:11 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:37:57.769 04:49:11 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:37:57.769 04:49:11 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:37:57.769 04:49:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:37:57.769 04:49:11 -- common/autotest_common.sh@10 -- # set +x 00:37:57.769 04:49:11 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:37:57.769 04:49:11 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:37:57.769 04:49:11 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:37:57.769 04:49:11 -- common/autotest_common.sh@10 -- # set +x 00:38:05.908 INFO: APP EXITING 
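Both daemons (the bdevperf pid 3322972 and the target pid 3322789) are torn down through the killprocess helper whose xtrace appears above: check the PID is alive with kill -0, look up the process name with ps to avoid signalling a sudo wrapper, then kill and wait. A condensed sketch reconstructed from those trace lines; the sudo-wrapper handling is an assumption, and in the real script the wait happens after killprocess returns:

# Condensed from the killprocess xtrace above.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2>/dev/null || return 1        # is the process still running?
    if [[ $(uname) == Linux ]]; then
        local name
        name=$(ps --no-headers -o comm= "$pid")   # reactor_0 / reactor_1 in the log
        [[ $name == sudo ]] && return 1           # assumption: never signal the sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                   # reap it so listeners and sockets are freed
}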
00:38:05.908 INFO: killing all VMs 00:38:05.908 INFO: killing vhost app 00:38:05.908 INFO: EXIT DONE 00:38:08.456 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:38:08.456 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:38:08.456 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:38:08.456 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:38:08.456 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:38:08.456 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:38:08.456 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:38:08.456 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:38:08.456 0000:65:00.0 (144d a80a): Already using the nvme driver 00:38:08.717 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:38:08.717 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:38:08.717 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:38:08.717 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:38:08.717 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:38:08.717 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:38:08.717 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:38:08.717 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:38:12.014 Cleaning 00:38:12.014 Removing: /var/run/dpdk/spdk0/config 00:38:12.014 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:38:12.014 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:38:12.014 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:38:12.014 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:38:12.014 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:38:12.014 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:38:12.014 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:38:12.014 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:38:12.014 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:38:12.014 Removing: /var/run/dpdk/spdk0/hugepage_info 00:38:12.014 Removing: /var/run/dpdk/spdk1/config 00:38:12.014 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:38:12.014 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:38:12.014 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:38:12.014 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:38:12.014 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:38:12.014 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:38:12.014 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:38:12.014 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:38:12.014 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:38:12.014 Removing: /var/run/dpdk/spdk1/hugepage_info 00:38:12.014 Removing: /var/run/dpdk/spdk2/config 00:38:12.014 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:38:12.014 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:38:12.014 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:38:12.014 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:38:12.014 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:38:12.014 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:38:12.014 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:38:12.014 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:38:12.014 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:38:12.014 Removing: /var/run/dpdk/spdk2/hugepage_info 00:38:12.014 Removing: /var/run/dpdk/spdk3/config 00:38:12.014 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:38:12.014 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:38:12.014 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:38:12.014 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:38:12.014 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:38:12.014 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:38:12.014 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:38:12.014 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:38:12.014 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:38:12.014 Removing: /var/run/dpdk/spdk3/hugepage_info 00:38:12.014 Removing: /var/run/dpdk/spdk4/config 00:38:12.014 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:38:12.014 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:38:12.014 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:38:12.014 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:38:12.014 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:38:12.014 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:38:12.014 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:38:12.014 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:38:12.014 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:38:12.014 Removing: /var/run/dpdk/spdk4/hugepage_info 00:38:12.014 Removing: /dev/shm/bdev_svc_trace.1 00:38:12.014 Removing: /dev/shm/nvmf_trace.0 00:38:12.014 Removing: /dev/shm/spdk_tgt_trace.pid2752543 00:38:12.014 Removing: /var/run/dpdk/spdk0 00:38:12.014 Removing: /var/run/dpdk/spdk1 00:38:12.014 Removing: /var/run/dpdk/spdk2 00:38:12.014 Removing: /var/run/dpdk/spdk3 00:38:12.014 Removing: /var/run/dpdk/spdk4 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2750857 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2752543 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2753091 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2754299 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2754474 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2755823 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2755857 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2756311 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2757422 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2757917 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2758316 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2758710 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2759120 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2759498 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2759630 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2759907 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2760295 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2761359 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2764749 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2765124 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2765460 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2765691 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2766068 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2766370 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2766774 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2766826 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2767153 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2767487 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2767540 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2767861 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2768328 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2768678 00:38:12.014 Removing: /var/run/dpdk/spdk_pid2769082 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2773595 00:38:12.275 Removing: 
/var/run/dpdk/spdk_pid2778686 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2790767 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2791540 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2796941 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2797300 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2802813 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2809762 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2812984 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2825453 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2836238 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2838280 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2839484 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2860841 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2865605 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2921427 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2927816 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2934887 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2942681 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2942770 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2943767 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2944805 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2945863 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2946589 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2946595 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2946931 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2946964 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2947109 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2948181 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2949186 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2950274 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2950905 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2950967 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2951307 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2952437 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2953820 00:38:12.275 Removing: /var/run/dpdk/spdk_pid2964388 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3000545 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3005965 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3007967 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3010009 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3010325 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3010340 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3010577 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3011109 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3013389 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3014445 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3014866 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3017591 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3018305 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3019016 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3024078 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3030777 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3030778 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3030779 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3035471 00:38:12.275 Removing: /var/run/dpdk/spdk_pid3045741 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3051093 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3058306 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3059806 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3061360 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3063161 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3068925 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3073904 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3082931 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3083054 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3088072 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3088204 00:38:12.536 Removing: 
/var/run/dpdk/spdk_pid3088458 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3089096 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3089120 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3094492 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3095080 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3100611 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3104243 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3110812 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3117362 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3127297 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3135959 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3136002 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3160023 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3160844 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3161625 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3162393 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3163451 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3164138 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3164824 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3165507 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3170692 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3170916 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3178141 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3178308 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3184907 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3190052 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3201633 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3202356 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3207917 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3208337 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3213257 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3220203 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3223140 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3235153 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3245807 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3247810 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3248817 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3268985 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3273378 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3276706 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3284293 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3284334 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3290215 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3292449 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3294924 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3296125 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3298637 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3300060 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3310471 00:38:12.536 Removing: /var/run/dpdk/spdk_pid3311022 00:38:12.797 Removing: /var/run/dpdk/spdk_pid3311685 00:38:12.797 Removing: /var/run/dpdk/spdk_pid3314628 00:38:12.797 Removing: /var/run/dpdk/spdk_pid3315111 00:38:12.797 Removing: /var/run/dpdk/spdk_pid3315649 00:38:12.797 Removing: /var/run/dpdk/spdk_pid3320373 00:38:12.797 Removing: /var/run/dpdk/spdk_pid3320531 00:38:12.797 Removing: /var/run/dpdk/spdk_pid3322341 00:38:12.797 Removing: /var/run/dpdk/spdk_pid3322789 00:38:12.797 Removing: /var/run/dpdk/spdk_pid3322972 00:38:12.797 Clean 00:38:12.797 04:49:26 -- common/autotest_common.sh@1451 -- # return 0 00:38:12.797 04:49:26 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:38:12.797 04:49:26 -- common/autotest_common.sh@730 -- # xtrace_disable 00:38:12.797 04:49:26 -- common/autotest_common.sh@10 -- # set +x 00:38:12.797 04:49:26 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:38:12.797 04:49:26 -- common/autotest_common.sh@730 -- # 
xtrace_disable 00:38:12.797 04:49:26 -- common/autotest_common.sh@10 -- # set +x 00:38:12.798 04:49:26 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:12.798 04:49:26 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:38:12.798 04:49:26 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:38:12.798 04:49:26 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:38:12.798 04:49:26 -- spdk/autotest.sh@394 -- # hostname 00:38:12.798 04:49:26 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:38:13.058 geninfo: WARNING: invalid characters removed from testname! 00:38:39.635 04:49:52 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:41.545 04:49:55 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:43.453 04:49:56 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:44.835 04:49:58 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:46.756 04:49:59 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:48.137 04:50:01 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:50.047 04:50:03 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:50.047 04:50:03 -- spdk/autorun.sh@1 -- $ timing_finish 00:38:50.047 04:50:03 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:38:50.047 04:50:03 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:50.048 04:50:03 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:38:50.048 04:50:03 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:50.048 + [[ -n 2665732 ]] 00:38:50.048 + sudo kill 2665732 00:38:50.058 [Pipeline] } 00:38:50.074 [Pipeline] // stage 00:38:50.079 [Pipeline] } 00:38:50.094 [Pipeline] // timeout 00:38:50.099 [Pipeline] } 00:38:50.113 [Pipeline] // catchError 00:38:50.118 [Pipeline] } 00:38:50.133 [Pipeline] // wrap 00:38:50.139 [Pipeline] } 00:38:50.152 [Pipeline] // catchError 00:38:50.162 [Pipeline] stage 00:38:50.164 [Pipeline] { (Epilogue) 00:38:50.178 [Pipeline] catchError 00:38:50.180 [Pipeline] { 00:38:50.193 [Pipeline] echo 00:38:50.195 Cleanup processes 00:38:50.201 [Pipeline] sh 00:38:50.490 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:50.490 3335735 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:50.505 [Pipeline] sh 00:38:50.792 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:50.792 ++ grep -v 'sudo pgrep' 00:38:50.792 ++ awk '{print $1}' 00:38:50.792 + sudo kill -9 00:38:50.792 + true 00:38:50.806 [Pipeline] sh 00:38:51.093 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:39:03.388 [Pipeline] sh 00:39:03.678 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:39:03.678 Artifacts sizes are good 00:39:03.694 [Pipeline] archiveArtifacts 00:39:03.701 Archiving artifacts 00:39:03.836 [Pipeline] sh 00:39:04.124 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:39:04.139 [Pipeline] cleanWs 00:39:04.150 [WS-CLEANUP] Deleting project workspace... 00:39:04.150 [WS-CLEANUP] Deferred wipeout is used... 00:39:04.158 [WS-CLEANUP] done 00:39:04.160 [Pipeline] } 00:39:04.176 [Pipeline] // catchError 00:39:04.188 [Pipeline] sh 00:39:04.476 + logger -p user.info -t JENKINS-CI 00:39:04.486 [Pipeline] } 00:39:04.499 [Pipeline] // stage 00:39:04.505 [Pipeline] } 00:39:04.519 [Pipeline] // node 00:39:04.524 [Pipeline] End of Pipeline 00:39:04.560 Finished: SUCCESS
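For reference, the coverage post-processing in the epilogue above boils down to a capture, a merge, and a series of filters. A condensed sketch of those lcov stages, with RC abbreviating the repeated --rc option block from the log (the genhtml_*/geninfo_* flags are elided here) and the jenkins paths shortened into variables:

# Condensed from the lcov invocations in the log above.
RC="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output

# 1. capture coverage counters produced by the test run
lcov $RC -q -c --no-external -d "$SPDK" -t "$(hostname)" -o "$OUT/cov_test.info"
# 2. merge with the pre-test baseline
lcov $RC -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# 3. strip third-party, system, and example paths from the merged report
lcov $RC -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
lcov $RC -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
lcov $RC -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"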